With protests sparked by the killing of George Floyd by Officer Derek Chauvin erupting all over the nation, states are beginning to ask the National Guard to step in. The epicenter of these demonstrations is Minneapolis, Minnesota, where the National Guard has already been deployed to handle protests and enforce the curfew.

But it's not just Minnesota. The military apparently has plans to intervene in several other states if necessary, as Ken Klippenstein reports for The Nation.
Defamation lawsuits often fail because of the high bar plaintiffs need to meet to prove defamation -- especially of a public figure. But, while there are lots of ways to lose a defamation lawsuit as a plaintiff, my favorite must certainly be the concept of a libel-proof plaintiff. This would be the notion that a plaintiff cannot be libeled or defamed if that plaintiff's reputation is so absolutely horrendous that further damage to it is impossible.
The intersection of school administration and law enforcement leads directly to insanity. All logic goes out the window when school administrators come across something that makes them feel slightly uncomfortable. Adding cops to the mix doesn't help anything. It only serves to turn every mildly misbehaving student into a criminal.

We're here to talk about bombs. I'm sorry. Let me clarify. Not actual bombs. Drawings of bombs. Drawings created by students who are likely to draw bombs, guns, and general violent mayhem without actually wishing any of that on their fellow students.

It took a couple of rounds in court to actually set this right. We've covered similar insanity over drawings of bombs here at Techdirt before, like the (temporarily) indefinite suspension of an autistic student who drew a bomb that looked like something straight out of a Looney Tunes short.

This bomb drawing was a little more intricate but no more threatening than the round black bomb with a fuse we've all seen in any number of cartoons no one saw fit to prosecute. (h/t Ari Cohn)

The Wisconsin Court of Appeals has finally ended the madness that began with terroristic threat and disorderly conduct charges being leveled against a middle school student.

The decision [PDF] recounts the unfortunate chain of events that ultimately needed to be addressed by the penultimate level of the state's criminal justice system.
Once again, the people that serve the public have failed to understand the public. Trying to turn citizens into narcs never works out as well as government agencies envision. The end result is almost always a useless waste of limited resources.

Eons ago, when the coronavirus was still a concern, the mayor of New York City set up a snitch line for residents to report social distancing violations. Instead of hot neighbor-on-neighbor action, the city's 311 line received a bunch of middle fingers, dick pics, and Hitler memes.

When Ohio's government set up a snitch line for employers to report employees who were collecting unemployment instead of coming to their COVID-encrusted workplaces, an enterprising coder put together a script that clogged the tip bin with algorithmically-generated garbage.

Now that there's civil unrest all over the place in response to the latest killing of a black person by a white police officer, the Dallas PD is asking citizens to step up… and report other citizens for exercising their First Amendment rights. It has not worked out well for the police, as Caroline Haskins reports for BuzzFeed.
Over 90% of Americans feel like they have no control over their online privacy. It is not hard to understand why so many of us feel so powerless when it comes to using the Internet, nor is the solution to such a pervasive feeling all that complicated.

It boils down to rules and liability—or, in other words, making sure that if a company violates your privacy under the law, there is an inescapable penalty. The clearer and more direct the path to holding a company accountable for violating your privacy—much like your physical health, property rights, emotional wellbeing, or other things held in legally enforceable trusts—the more confidence will return to the Internet marketplace.

But we don't have these clear, enforceable rights in today's American consumer privacy legal system for the vast majority of Internet-privacy-related activity. In fact, when the next Google or Facebook scandal over user privacy rolls around, think back to the last one—likely just a few months old—and ask how much in damages the company paid and whether the company had to compensate individual people for the violation.

In many cases, the answer is going to be: no penalty at all, which then feeds into users' sense of powerlessness. But the fact that companies often have to pay no penalty, and the fact that we do not have laws in place to remedy these privacy harms, is a choice we have made. It is not the natural order of things, and it is not inevitable.

We have, as a society, made decisions under our intellectual property laws where absolutely no liability is allowed in order to promote another, non-monetary value, namely our freedom of expression. For example, the practice of criticizing a film on YouTube while playing portions of it in the background is considered a fair use. This means, despite copyright holders having the exclusive rights over the public performance of their work, we have decided to extinguish liability when it involves the expression of criticism.

In the absence of fair use, the critic using the film, as well as YouTube, would be directly liable for a lot of money for playing portions of it. However, we counterbalance and limit the economic right of the filmmaker in order to promote free speech values through fair use. In essence, we keep a liability-free zone for criticism, and that is generally seen as a net positive for users. It also promotes the creation of open platforms, allowing those speakers to discover audiences and build engagement.

But in consumer privacy we have not seen nearly the same benefit yielded back to consumers in exchange for the mostly liability-free zone. There is no race to the top in guarding consumers' personal information, because the profit-maximizing effort isn't about augmenting our privacy; it is about tearing it down as much as possible for profit. This is why we keep getting these privacy scandals. There is no need to apply morality to the analysis, as often happens when people observe corporate behavior; the simple question is how profit maximization (which corporations have to pursue under the law) is being countered by law to reflect our expectations.

When we look at the problem of consumer privacy from this angle, it becomes fairly clear that private rights of action for consumer personal privacy would be transformative.
No longer would a corporation view experiments with handling personal information as a generally risk-free, profit-making proposition if financial damages and a loss of profit were involved.

Industry Wants a Consumer Privacy Law—Just So Long As You Can't Sue Them

The long road of industry opposition—and the extreme hypocrisy of now pretending to endorse passage of a comprehensive consumer privacy law—is worth reflecting on in order to understand why in fact we have no law today.

If we go back a little over a decade to a privacy scandal that launched a series of congressional hearings, we find a little company called NebuAd that specialized in deep packet inspection.

NebuAd's premise was scary in that it proposed to allow your ISP to record everything you do online and then monetize it with advertisers. I was a staffer on Capitol Hill when NebuAd came to Congress to explain their product, and still remember the general shock at the idea being proposed. In fact, the idea was so offensive it garnered bipartisan opposition from House leaders and ultimately led to the demise of NebuAd.

The legislative hearings that followed the growing understanding of "deep packet inspection" led to discussions of a comprehensive privacy bill in 2009. But despite widespread concern with developing industry practices as the technology was evolving, we never got anywhere, out of concern for the freemium model of Internet products. It is hard to remember this time, but back then the Internet industry was still a fairly new thing to the public and Congress.

The iPhone had launched just two years earlier, and the public was still in the process of transitioning from flip phones to smartphones. Only three years prior had Facebook become available to the general public. Google had only a small handful of vertical products, the newest being Google Voice—which allowed people to text for free at a time when each text you sent cost a fee.

All of these things were seen as net positives for users, yet all hinged on the monetization of personal information being relatively liability-free. So for years policymakers, including an all-out effort by the White House in 2012, searched for a means to balance privacy with innovation. Companies generally known as "big tech" today were still very sympathetic entities in that the innovations they continued to produce were seen as both novel and useful to people. Therefore, their involvement was actively solicited by the White House in trying to jointly draft a means to promote privacy while allowing the industry to flourish.

Ultimately, it was a wasted effort, because what industry actually wanted was the liability-free zone baked into law with little regard for the increasing degradation of user privacy. Back then, most of the Internet companies still competed with one another, forcing them to try to be more attractive to users with stronger privacy settings. Even Google Search was facing a direct assault from Microsoft with its fairly new Bing product.

As efforts to figure out a privacy regime for Internet applications and services were being stalled by the Internet companies, progress was being made with the substantially more mature, already regulated Internet Service Provider (ISP) industry.

Congress had already passed a set of privacy laws for communications companies under Section 222 of the Communications Act, so a great many ISPs, being former telephone companies, had a comprehensive set of privacy laws applicable to them (including private rights of action).
But their transition into broadband companies began to muddy the waters, particularly as the Federal Communications Commission started to say in 2005 that broadband was magically different and therefore should be quasi-regulated.

Having learned nothing from the fiasco of NebuAd and the potential banning of "deep packet inspection" for ISPs, the broadband industry kept rolling out other privacy-invasive ideas. Things such as "search hijacking"—where your search queries were monitored and rerouted—became a thing. AT&T began forcibly injecting ads into WiFi hotspots at airports, wireless ISPs preinstalled "Carrier IQ" on phones to track everything you did (a practice that ended when people sued them directly under a class action lawsuit), and Verizon invented the "super-cookie," prompting a privacy enforcement response from the FCC in 2014.

Even after the FCC stopped treating broadband as uniquely different from other communications access technology in 2015, the industry continued to push the line. In that same year, telecom carriers partnered with SAP Consumer Insight 365 to "ingest" data from 20 to 25 million mobile subscribers close to 300 times every day (we do not know which mobile telephone companies participate in this practice, as that information is kept secret). That data is used to inform retailers about customers' browsing, geolocation, and demographics.

So, unsurprisingly, the FCC came out with strong, clear ISP privacy rules in 2016 that continued the long tradition of privacy protections for our communication networks.

However, the heavily captured Congress, which had not taken a major pro-privacy vote on Internet policy in close to a decade, quickly took action to repeal the widely supported FCC privacy rules on behalf of AT&T and Comcast. Ironically, the creation of ISP privacy rules by the FCC only happened because Congress had created a series of privacy laws, including private rights of action, for various aspects of our communication industry more than a decade prior.

While many of the leaders of the ISP privacy repeal effort claim to be foes of big tech, they have done next to nothing to move a consumer privacy law. In fact, all they did was solidify the capture of Congress by giving AT&T and Comcast a reason to team up with Google and Facebook in opposing real privacy reform.

EFF witnessed this joint industry opposition first hand as we attempted to rectify the damage Congress did to broadband privacy with a state law in California. In fact, between ISPs and big tech, absolutely no new privacy laws passed in the states in 2017 in response to Congress repealing the ISP privacy rules.

Despite the industry's arrogant belief that it could sustain perpetual capture at the legislative level, along came an individual named Alastair Mactaggart, who personally financed a ballot initiative on personal privacy that later became the California Consumer Privacy Act (CCPA).

While they could "convince" a legislator of the righteousness of their cause with political contributions, they had no real means to convince the individual that the status quo was good.
After Cambridge Analytica and wireless carriers selling geolocation to a black market for bounty hunters, virtually no one thinks this industry should be unregulated on privacy.

So rather than continue to publicly oppose real privacy protections, the industry has opted to pretend it supports a law, just so long as that law gets rid of state laws (including state private rights of action), putting all our eggs into the basket of a captured regulator. In other words, industry will only support a federal privacy law if it further erodes our personal privacy rather than enhancing it.

This opening offer from industry is a wild departure from other privacy statutes, all of which include an individual right to sue: statutes covering wiretaps, stored electronic communications, video rentals, driver's licenses, credit reporting, and cable subscriptions. Not to miss their marching orders, industry-friendly legislators were quick to put together a legislative hearing on consumer privacy that literally had no one representing consumers.

But this game, in which industry holds and finances enough legislators to prevent any real law from passing, will only last so long. After all, industry's effort to ban states from passing privacy laws effectively died once the Speaker of the House, herself from California, made it clear she would not undermine her own state's law on behalf of industry.

Furthermore, Senator Cantwell, a leader on the Senate Commerce Committee, has introduced comprehensive legislation that includes a private right of action, and more than a dozen Senators, led by Senator Schatz, have endorsed the concept, supported by EFF, of creating an information fiduciary. As more and more legislators make publicly clear the parameters of what they consider a good law, it becomes harder for industry to sustain its behind-the-scenes opposition. But we are still far away from the end, which means more has to be done in the states until enough of Congress can break free of the industry shell game.

If We Do Not Restore Trust in Internet Products, People Will Make Less Use of the Internet, and That Comes with Serious Consequences

As we wrestle with containing COVID-19, a solution being proposed by Apple and Google in the form of contact tracing is facing a serious hurdle. A majority of Americans do not want to use health data applications and services from these companies because they do not trust what they will do with their information. Since they can't directly punish these companies for abusing their personal health data, they are exercising the only real choice they have left: not using them at all.

Numerous studies from federal agencies such as the Department of Commerce, the Federal Trade Commission, and the FCC all point to the same end result if we do not have real privacy protections in place for Internet activity. People will simply refrain from using applications and services that involve sensitive uses such as healthcare or finances. In fact, lack of trust in how our personal information is handled has a detrimental impact on broadband adoption in general, meaning a growing number of people will simply not use the Internet at all in order to keep their personal information to themselves.

Given the systemic powerlessness users feel about their personal information when they use the Internet, the dampening effect that has on fully utilizing the Internet, and the loss of broadband adoption, it is fairly conclusive that the near-liability-free zone is an overall net negative as public policy.
Congress should be working to actively give users back their control, instead of letting the companies with the worst privacy track records dictate users' legal rights. Any new federal data privacy law must not preempt stronger state data privacy rules, and it must contain a private right of action.

While special tailoring has to be done for startups and new entrants with limited finances, to ensure they can enter the market under the same conditions Google and Facebook launched under, no such accommodation is needed for big tech.

Once clear lines of liability and rules are established for major corporate entities, efforts to launch the next privacy-invasive tech will be scrutinized by corporate counsel eager to shield the company from legal trouble. That, ultimately, is the point of having a private right of action in law. It is not to flood companies with lawsuits, but rather to get them to operate in a manner that avoids lawsuits.

As users begin to understand that they have an inalienable legal right to privacy when they use the Internet, they will begin to trust products with greater and more sensitive uses that will benefit them. This will open new lines of commerce as a growing number of users willingly engage in deeply personal interactions with next-generation applications and services. For all the complaints industry has about consumer privacy laws, the one thing it never takes into account is the importance of trust. Without it, we start to lose the full potential of what the 21st century Internet can bring.

Ernesto Falcon is Senior Legislative Counsel at the Electronic Frontier Foundation with a primary focus on intellectual property, open Internet issues, broadband access, and competition policy.
Another day, another example of copyright acting as censorship. The folks over at Unicorn Riot have been covering the protests around the country, but apparently they can't do that as they'd like because copyright is getting in the way. Unicorn Riot announced on Twitter that video interviews they had conducted and posted have been pulled down from both Facebook and YouTube due to copyright claims such as this one:
Gaining new knowledge doesn't have to be hard. In fact, it can be easy and fun. Learnable is an eLearning platform providing you with handpicked lessons and courses on coding languages. Choose from a wide range of courses to suit your goals — C#, C++, PHP, Swift, Java, SQL and more! And if all this coding stuff gets kind of tiring, you can even take a break with the built-in Meditation Mini-App. Learnable is available on both iOS and Android, meaning you can learn to code anytime and anywhere. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
First off, I would like to thank Mike Masnick and Techdirt for publishing my post on the George Floyd killing and the (in my eyes) justifiable destruction of police property as an answer to years of injustice and "bad apple" excuses. Very few sites would have published such a post. Most would have rejected it after reading the title.

I also appreciate the commenters who weighed in, including those who disagreed with me. It was a strong stance for me to take and I expected to be drowned in criticism. That I wasn't buried by critics perhaps demonstrates my points were well-made. Or it may just indicate the general public is sick and tired of cop bullshit -- bullshit they far too often walk away from, thanks to generous union contracts, the almost-obligatory judicial application of qualified immunity, or the continued sheltering of police officers from personal responsibility by legislators.

But I did want to respond to one comment in the thread in particular. This comment suggested I was off-base and that peaceful protests are productive and have resulted in systemic changes. Despite the evidence I had laid down that being peaceful and seeking change through acceptable routes has been a net loss over the last 50+ years, a commenter suggested otherwise.

This is the central argument of the comment submitted by one of our many anonymous commenters. (Just a reminder, we love anonymous commenters and would never demand you give us all your vitals in exchange for your ability to comment on articles. We also allow you to turn ads off if you wish with no financial obligation. That being said, there are multiple ways to support this fiercely independent site, so click thru if you'd like to help. Thanks!)
For a long time, we've noted how broadband usage caps are bullshit. They don't actually help manage congestion, they have nothing to do with "fairness," and they are little more than glorified price hikes on the backs of captive customers in uncompetitive markets. Worse, they can be abused anti-competitively by incumbent broadband providers, which was one of the major triggers of the net neutrality debate.

For example, AT&T has for a while made its own streaming TV services exempt from its usage caps, while competing streaming services (Netflix, Amazon, whatever) count against a user's monthly data allotment. This gives AT&T a distinct advantage in that users are incentivized to avoid competing services lest they face completely arbitrary and unnecessary usage limits and fees. It's bullshit. It has always been bullshit.

AT&T has added another layer to this bullshit cake. The company has long experimented with something called "sponsored data," which lets companies pay AT&T extra if they want to be exempt from AT&T's (again, completely arbitrary and unnecessary) broadband usage caps. This adds yet another anti-competitive layer to the equation by letting a deep-pocketed company (say: ESPN) get a distinct advantage over smaller startups that can't afford to pay AT&T's toll.

Last week AT&T launched yet another streaming TV service, HBO Max. This service also won't count against AT&T's usage caps and overage fees, AT&T confirmed to The Verge:
Back in 2013, we made clear our concerns about the Italian communications watchdog AGCOM setting up new administrative copyright enforcement powers that would allow it to simply up and declare sites to be infringing, at which point ISPs would be ordered to block those websites. Soon after that, Italy's public prosecutor seemed to decide that part of his job was also to order websites blocked based solely on his own say so.

In the latest such order from the Public Prosecutor's office declaring a list of sites to be infringing, Italy has apparently decided that the famous and wonderful Project Gutenberg website, a repository of public domain books, must be blocked. I don't know about the other 27 sites listed in the order, but Project Gutenberg is no piracy site. Yet here it is at number 25 on the list:

They even go to the trouble of looking up the whois info. You would think that someone would recognize that a site founded in 1996 is maybe not a giant piracy site:

The Italian Library Association is asking what the fuck is going on (translation via Google Translate):
When we last talked about the Geo Group, a company making hundreds of millions of dollars running private prisons, one of its executives was attempting to improve the company's reputation by constantly removing all the dirt from the Wikipedia page about the company. In trying to do this, of course, the company actually amplified the controversies listed on Wikipedia and, having been caught trying to scrub the internet of its own sins, found itself in headlines as a result. At present, the Wikipedia page still lists those controversies, but more on that in a moment.

That's because the latest bit of news from Geo Group is that it is suing Netflix over the use of its logo in a fictional prison in Messiah.
Keep your mitts off cellphones if you don't have a warrant. That's the message at least one court is sending to law enforcement. A 2014 decision by the US Supreme Court introduced a warrant requirement for cellphone searches. Since then, cops mostly seem to be complying with the mandate. Of course, this half-assed analysis of mine rests solely on federal cases I've managed to catch drifting downstream in the internet flotsam, so it's far from conclusive. But -- unlike the SCOTUS decision erecting a warrant requirement for historical cell site location info -- there doesn't seem to be much gray area in the Riley decision for law enforcement to explore.

But what exactly is a "search" in the Fourth Amendment/Riley context? It depends on which court you ask. The most straightforward reading of the Riley decision would be a warrant requirement for a search of a phone's contents. But a couple of courts have read the decision even more expansively: Riley doesn't just cover full-fledged searches of device contents. It also covers sneakier peeks at suspects' phones.

In 2016, a federal court ruled that the FBI's opening of a flip phone (roughly one week after the suspect's arrest) violated the Fourth Amendment. Even the recognition that the home screen of a phone was subject to a "diminished" expectation of privacy couldn't save the feds' search. The court said the FBI's examination of the unexposed area of the phone -- the closed screen -- was a search and subject to Riley. To rule otherwise would be to allow the government to use similar cursory examinations to dodge the warrant requirement or unlawfully seek info to buttress affidavit claims in warrant requests for a more thorough search.
This week, we've got a special cross-post from 16 Minutes On The News — an excellent tech podcast by a16z that's well worth subscribing to. For the latest episode, host Sonal Chokshi interviewed Mike all about Section 230 and Trump's recent executive order about social media — and as you might imagine, it took a lot longer than 16 minutes! We've got the complete interview here on the Techdirt Podcast.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
The recent Copyright Office report on Section 512 of the DMCA (the notice-and-takedown provisions) has been frustrating on many levels, including the fact that it simply ignores that the public is a stakeholder (actually the main stakeholder) in copyright policy. But one of the most frustrating parts of the report is that it ignored a ton of testimony (including some provided by me) about how frequently the 512 notice-and-takedown process is abused (either on purpose or accidentally) to take down non-infringing content. The Copyright Office acts as if this is a fringe issue, when the data suggests it's a massive problem impacting millions.

And just to put a pretty fine point on it, you probably heard about or (hopefully) saw the launch this weekend of the SpaceX Dragon capsule, the first private manned mission to space, done in conjunction with NASA. It was pretty cool, and a ton of people tuned in to watch it live. Of course, many also tuned in the previous Wednesday to try to watch the originally planned launch, before it got scrubbed due to weather. NASA had a wonderful live stream going for both (which I watched). And works produced by NASA are in the public domain -- which is why many other broadcasters were easily able to use them as well.

But because the numbskulls at NBC Universal work with the default mindset that everything must be owned -- and if everything must be owned, then obviously anything that NBC Universal broadcasts must be owned by NBC Universal -- it made bogus copyright claims on a ton of others using NASA's footage, including NASA itself, leading to NASA's own public domain video being blocked on NASA's own YouTube page.
As Facebook employees stage a digital walk-out and make their thoughts known about the social media giant's choice not to intervene in any way on "political posts", especially those of President Donald Trump, some have called for the newly-created Oversight Board to step up and force a change in Facebook. While the official answer is that the Board can't start yet (because supposedly laptops haven't been given out), the real and very simple reason why the Facebook Oversight Board won't get involved is because it can't. It was not created to function that way, it is not staffed for something like this, and ultimately, due to its relationship with Facebook, anything it would say on this matter right now would be taken in an advisory capacity at best. Facebook, understandably not wanting to actually give any of its power away, played confidence games with the idea of external, independent oversight, and it's clear that it fooled a lot of people. Let me explain.

In three-card monte, the huckster keeps shuffling three playing cards until the victim is likely to guess wrong on where the "money card" may be hiding, and then flops the cards one by one. For Facebook's prestidigitation on content moderation, last month's announcement of the initial 20 highly regarded experts tapped as members of its independent oversight board is the second card flop, and, predictably, the money card is not there.

The ongoing sleight of hand performed by Facebook is subtle but fundamental. The board was set up as truly independent in every way, from member and case selection to the board's internal governance. In terms of its scope and structure, it is guided by previously released bylaws to primarily handle a small set of content removal cases (which come up to the board after exhausting the regular appeals process) and to direct Facebook to change its decisions in those cases. To a much lesser extent, the Board can (although time and resources are not allocated for this) provide input or recommendations about Facebook's content moderation policies. However, Facebook is not obligated in any way to follow those policy recommendations; it must simply respond within 30 days and talk about any action it may take.

In the pages of the San Francisco Chronicle's Open Forum, and elsewhere, I and others have called attention to this empty action as far back as September 2019, at the first card flop: the public release of the Board's charter and bylaws. The project continued unabated and unchanged as friendly experts extolled the hard work of the team and preached optimism. Glaring concerns over the Board's advisory-at-best, non-binding overall power not only went unaddressed, but were actually dismissed with the caution that board member selection, last month's flop, would be where the money card is. Can you spot the inconsistency? It doesn't matter if you have the smartest independent advisors if you're not giving them the opportunity to actually impact what you do. Of course, the money card wasn't there.

In early May, the Menlo Park-based company released the list of its Oversight Board membership, with impressive names (former heads of state, Nobel Prize laureates and subject matter experts from around the world).
Because the Board is truly independent, Facebook's role was minimal. Beyond coming up with said structure and bylaws in consultation with experts from around the world (full disclosure: the author was involved in one round of consultations in mid-2019), it only directly chose the four co-chairs, who were then heavily involved in choosing the other 16 members. A lot of chatter around this announcement focused, predictably, on who the people are (is the board diverse? is it experienced enough?), while some have even focused on how independent the board truly is. As the current crisis is showing, none of that matters.

As we witness the Board's institutionalized, structural and political inability to perform oversight, it is becoming entirely clear that Facebook is not, at all, committed to fixing its content moderation problems in any meaningful way, and that political favor is more important than consistently applied policies. There is no best-case scenario anymore, as the Board can only fail or infect the rest of the industry. And what is a lose-lose for all of us will likely still be a win-win for Facebook.

The bad-case scenario is the likeliest: the Board is destined to fail. While Zuckerberg's original ideas of transparency and openness were great on paper, the Board quickly turned into just a potential shield against loud government voices (such as Big Tech antagonist Sen. Hawley). Not only is that not working (Sen. Hawley responded to the membership list with even harsher rhetoric), but the importance placed on the optics, versus the reality of solving this problem, is even more obvious now. Giving the Board few, if any, real leverage mechanisms over the company can at most build a shiny Potemkin village, not an oversight body. If we dispense with all the readily available evidence to the contrary and give Facebook the benefit of the doubt that it tried, the alternative reasons for this rickety and impotent construction are not much better. It may be that having the final say over difficult cases, the Board's main job, is not something Facebook was comfortable doing by itself anyway (and who can blame it, given the pushback the platform gets with any high-profile decision). Or it may be a bizarre allegiance to the flawed constitutional law perspective that Facebook can build itself a Supreme Court, which makes the Board act as an appellate court of sorts, with a vague potential for creating precedent rather than truly providing oversight.

If the Board's failure doesn't tarnish the prospect of a legitimate private governance model for content moderation, there's a lot to learn about how to avoid unforced errors. First, we can safely say that while corporations may be people, they are definitely not states. Creating a pseudo-judiciary without any of the accouterments of a liberal-democratic state, such as a hard-to-change constitution, co-equal branches and some sort of social contract, is a recipe for disaster. Second is a fact that theory, literature and practice have long argued: structure fundamentally dictates how this type of private governance institution will run. And with an impotent Board left to mostly bloviate after the fact, without any real means to make changes to the policies themselves, this structure clearly points to a powerless but potentially loud "oversight" mechanism, pushed to the front as a PR stunt but unequipped to deal with the real problems of the platform.
Finally, we see that even under intense pressure from numerous transpartisan groups, and even with a potential openness to fixing a wicked problem, platforms are very unwilling to actually give up, even partly, their role and control in moderating content, but will gladly externalize their worst headaches. If their worst headaches were aligned with the concerns of their users, that would be great, but creating "case law" for content moderation is an exercise in futility, as the company struggles to reverse-engineer Trump-friendly positions with its long-standing processes. We don't have lower court judges who get to dutifully decide whether something is inscribed in the board's previous actions. We have either overworked, underpaid and scarred people making snap decisions every minute, or irony- and nuance-illiterate algorithms poised to interpret these decisions mechanically. And more to the point, we have executives deciding to provide political cover to powerful players rather than enforce their own policies, knowing full well they're not beholden to any oversight, since, even if the Board were already up and running, by the time it ruled on this particular case, if ever, the situation would no longer be of national importance.

As always, there still is a solution. The Oversight Board may be beyond salvaging, but the idea of a private governance institution, where members of the public, civil society, industry and even government officials can come together and try to reach a common ground on what the issues are and what the solutions might be, should still flourish, and should not be thrown away simply because Facebook's initial attempt was highly flawed. Through continued vigilance and genuine, honest critiques of its structure and real role in the Facebook ecosystem, the Oversight Board can, at best, register as just one experiment of many, not a defining one, and we can soldier on with more diverse, inclusive, transparent, and flexible industry-wide dialogues and initiatives.

The worst-case scenario is if the Board magically coasts through without any strong challenge to its shaky legitimacy or its impotent role. The potential for this to happen is there, since there are more important things in the world to worry about than whether Facebook's independent advisory body has any teeth. In that case, Facebook intends, one way or another, to franchise it to the rest of the industry. And that would be the third, and final, flop. However, as I hope you've figured out by now, the money card wouldn't be there either. The money card, the card that Facebook never actually intended to give away or even show us, the power over content moderation policies, was never embedded in the structure of the board, its membership, or any potential industry copycats that could legitimize it. This unexpected event allowed us to take a peek at the cards: the money card is still where it was all along, in Facebook's back pocket.

David Morar is Associate Researcher at the Big Data Science Lab at the West University of Timisoara, Romania.
The Ultimate Learn To Play Piano Bundle has 10 courses designed to take your music skills from beginner to advanced. You'll learn music theory, how to read and write music, and how to play chords and chord progressions. You'll also learn how to compose melodies, how to train your ears to recognize different types of chords and keys, and much more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Over the last few months, it's been weird to watch how, any time we point out that there's no actual evidence of anti-conservative bias in the content moderation practices of social media, some in our comments absolutely lose their shit. One commenter has been on a rampage just this last week, declaring me an evil liar for refusing to admit the "obvious" fact that there's anti-conservative bias in moderation. However, when I and others ask these people for that evidence, it never seems to show up.

I imagine they are not going to like this story either. A new study from CrowdTangle, a data analytics firm that is owned by Facebook and has access to Facebook data, seems to suggest that if there's any bias, it goes the other way:
So we've noted repeatedly how AT&T's entry into the video space hasn't gone according to plan. First, the company spent so much money on mergers ($150 billion for Time Warner and DirecTV) in recent years, it effectively crippled itself with debt. Second, the company passed that merger debt on to most of its customers in the form of price hikes, which defeated the whole point of "cutting the TV cord." Third, AT&T launched so damn many confusing streaming brands simultaneously, it even confused the company's own employees.

Collectively, this resulted in AT&T losing 3.43 million TV subscribers last year alone, which certainly wasn't the kind of sector domination executives originally envisioned. And there's every indication that things might get worse.

As noted, AT&T already offers a very confusing array of TV services: HBO Go, HBO Now, AT&T Now, AT&T TV, AT&T WatchTV, AT&T U-verse (IPTV) and DirecTV (satellite). Last week the company launched yet another streaming platform, HBO Max. But there's trouble in paradise: because of contractual standoffs with Amazon and Roku, the service apparently won't be appearing on either platform at launch. Given Roku is the most popular streaming hardware in America by a pretty wide margin (39% market share in 2019), that's kind of a problem for AT&T:
We can't have nice things. We can't even have mediocre things. And, in the midst of a global pandemic, we can't even have basic things. The Bangladesh government hasn't exactly discovered the power of censorship. The government and this power are already acquainted. But with a novel virus in the air, the government has discovered it can silence speech more effectively.
Regular readers of Techdirt will be all too familiar with the problem of corporate sovereignty -- the ability of companies to sue entire countries for alleged loss of profits caused by government action. Also known as investor-state dispute settlement (ISDS), there have been indications that some countries are starting to drop ISDS from trade and investment treaties, for various reasons. But a worrying report from Corporate Europe Observatory suggests that we are about to witness a new wave of corporate sovereignty litigation. Hard though it may be to believe, these cases will be claiming that governments around the world should be reimbursing companies for the loss of profits caused by tackling COVID-19:
There was a window of opportunity for cops following the George Floyd killing. Floyd, suspected of nothing more than passing a fake $20 bill, was killed by Officer Derek Chauvin of the Minneapolis PD. Chauvin placed his knee on Floyd's neck until he was dead. This act lasted for nearly nine minutes -- and for nearly three minutes after Chauvin checked for a pulse and found nothing. Yet he persisted, and none of the three cops around him stopped him.

Chauvin has been criminally charged and is under arrest. We'll see where that takes us. But the opportunity was there for the rest of the nation's cops to separate themselves from this "bad apple." Cop defenders ignore what bad apples do to barrels, but we won't. Chauvin is a symptom. He is not the disease.

As protests broke out around the nation, law enforcement agencies responded. While a small number attempted to find middle ground with aggrieved citizens, most acted as though they were a law unto themselves in these troubled times.

One site got it completely right -- a site that so often offers up hot takes that it is the source of its own meme. Slate, of all places, nailed this call:
We've noted repeatedly how interstate inmate calling service (ICS) companies have a disturbingly cozy relationship with government, striking (technically buying) monopoly deals that let them charge inmate families $14 per minute. Worse, some ICS companies like Securus Technologies have been under fire for helping the government spy on privileged inmate-attorney communications, information that was only revealed in 2015 after Securus was hacked. Given the apathy toward prison inmates and their families ("Iff'n ya don't like high prices, don't go to prison, son!"), reform on this front has been glacial at best.

The hacker-obtained data from 2015 featured 70 million records of phone calls (and recordings of the phone calls themselves) placed by prisoners in at least 37 different states over a two-and-a-half-year period. Of particular note were the estimated 14,000 recordings of privileged conversations between inmates and their lawyers:
Clearview is currently being sued by the attorney general of Vermont for violating the privacy rights of the state's residents. As the AG's office pointed out in its lawsuit, users of social media services agree to many things when signing up, but the use of their photos and personal information as fodder for facial recognition software sold to government agencies and a variety of private companies isn't one of them.
Online privacy can't be solved by giving people new property rights in personal data. That idea is based on a raft of conceptual errors. But consumers are already exercising property rights, using them to negotiate the trade-offs involved in using online commercial products.

People mean a lot of different things when they say "privacy." Let's stipulate that the subject here is control of personal information. There are equal or more salient interests and concerns sometimes lumped in with privacy. These include the fairness and accuracy of big institutions' algorithmic decision-making, concerns with commodification or commercialization of online life, and personal and financial security.

Consumers' use of online services will always have privacy costs and risks. That tension is a competitive dimension of consumer Internet services that should never be "solved." Why should it be? Some consumers are entirely rational to recognize the commercial and social benefits they get from sharing information. Many others don't want their information out there. The costs and risks are too great in their personal calculi. Services will change over time, of course, and consumers' interests will, too. Long live the privacy tension.

Online privacy is not an all-or-nothing proposition. People adjust their use of social media and online services based on perceived risks. They select among options, use services pseudonymously, and curtail and shade what they share. So, to the extent online media and services appear unsafe or irresponsible, they lose business and thus revenue. There is no market failure, in the sense used in economics.

Of course, there are failures of the common sort all around. People say they care about privacy, but don't do much to protect it. Network effects and other economies of scale make for fewer options in online services and social media, so there are fewer privacy options, much less bespoke privacy policies. And companies sometimes fail to understand or abide by their privacy policies.

Those privacy policies are contracts. They divide up property rights in personal information very subtly—so subtly, indeed, that it might be worth reviewing what property is: a bundle of rights to possess, use, subdivide, trade or sell, abandon, destroy, profit, and exclude others from the things in the world.

The typical privacy policy vests the right to possess data with the service provider—a bailment, in legal terminology. The service provider gets certain rights to use the data, the right to generate and use non-personal information from the data, and so on. But the consumer maintains most rights to exclude others from data about them, which is all-important privacy protection. That's subject to certain exceptions, such as responding to emergencies, protecting the network or service, and complying with valid legal processes.

When companies violate their privacy promises, they're at risk from public enforcement actions—from Attorneys General and the Federal Trade Commission in the United States, for example—and lawsuits, including class actions. Payouts to consumers aren't typically great because individualized damages aren't great. But there are economies of scale here, too. Paying a little bit to a lot of people is expensive.

A solution? Hardly. It's more like an ongoing conversation, administered collectively and episodically through consumption trends, news reporting, public awareness, consumer advocacy, lawsuits, legislative pressure, and more.
It's not a satisfactory conversation, but it probably beats politics and elections for discovering what consumers really want in the multi-dimensional tug-of-war among privacy, convenience, low prices, social interaction, security, and more.

There is appeal in declaring privacy a human right and determining to give people more of it, but privacy itself fits poorly into a fundamental-rights framework. People protect privacy in the shelter of other rights—common law and constitutional rights in the United States. They routinely dispense with privacy in favor of other interests. Privacy is better thought of as an economic good. Some people want a lot of it. Some people want less. There are endless varieties and flavors.

In contrast to what's already happening, most of the discussion about property rights in personal data assumes that such rights must come from legislative action—a property-rights system designed by legal and sociological experts. But experts, advocates, and energetic lawmakers lack the capacity to discern how things are supposed to come out, especially given ongoing changes in both technology and consumers' information wants and needs.

An interesting objection to creating new property rights in personal data is that people might continue to trade personal data, as they do now, for other goods such as low- or no-cost services. That complaint—that consumers might get what they want—reveals that most proposals to bestow new property rights from above are really information regulations in disguise. Were any such proposal implemented, it would contend strongly in the metaphysical contest to be the most intrusive yet impotent regulatory regime yet devised. Just look at the planned property-rights system in intellectual property legislation. Highly arguable net benefits come with a congeries of dangers to many values the Internet holds dear.

The better property rights system is the one we've got. Through it, real consumers are roughly and unsatisfactorily pursuing privacy as they will. They often—but not always—cede privacy in favor of other things they want more, learning the ideal mix of privacy and other goods through trial and error. In the end, the "privacy problem" will no more be solved than the "price problem," the "quality problem," or the "features problem." Consumers will always want more and better stuff at a lower cost, whether costs are denominated in dollars, effort, time, or privacy.

Jim Harper is a visiting fellow at the American Enterprise Institute and a senior research fellow at the University of Arizona James E. Rogers College of Law.
Warning: this post will contain what we in the business like to call strong language, invective, and violent content. Govern yourself accordingly.

Content warning 2: possibly exceedingly long.

ONCE UPON A TIME, A MAN GOT FUCKED

Let's start with a story:

(Those of you who'd like to read a transcript, rather than watch this powerful performance by Orlando Jones [possibly for "Dear God, I'm still at work" reasons], can do so here.)

This is the history of black Americans. For a few hundred years, they weren't even Americans. And even after that -- even after the Civil War -- black Americans spent a hundred years being shunted to different schools, different neighborhoods, different restrooms, different bus seating, different water fountains. They are not us, this land of opportunity repeatedly stated.

Integration was forced. It was rarely welcomed. Being black still means being an outsider. Four hundred years of subjugation doesn't just end. This is how the story continues:
Joe Biden had a golden opportunity to actually look presidential and stand up for free speech and the 1st Amendment at a moment when our current President is seeking to undermine them with an executive order designed to intimidate social media companies into hosting speech they'd rather not host, and to scare others off from fact-checking his lies. And he blew it. He doubled down on the ridiculous claim that we should "revoke" Section 230.
Businesses, from small shops to big enterprises, depend on data science. It's a field responsible for evaluating and interpreting data, statistics, and trends to help businesses arrive at better decisions and actions. The 2020 All-in-One Data Scientist Mega Bundle will help you learn and master different data processes such as visualization, computing, analysis, and more. Across 12 courses, you will also learn how to use data across different platforms and languages including Python, Apache Hadoop, R, and more. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
There's kind of a lot going on in America right now -- what with widespread protests about police violence (leading to more police violence), while we're still in the middle of the largest pandemic in a century. You'd think some of those things would be priorities for Congress, but instead, Senate Republicans have decided that now is the time to push ahead with helping Hollywood by examining how to make copyright worse. Even the Washington Post is completely perplexed as to how this could possibly be a priority right now.
Two years ago, an investigation by the Associated Press and Princeton computer scientists found that Google services on both Android and Apple devices routinely continued to track user location data, even when users opted out of such tracking. Even if users paused "Location History," the researchers found that some Google apps still automatically stored time-stamped location data without asking for the consumer's consent.

Fast forward two years, and Arizona Attorney General Mark Brnovich has sued Google for violating the Arizona Consumer Fraud Act over the practice. The lawsuit (pdf), filed in Maricopa County Superior Court, is based on an investigation begun by Brnovich's office back in 2018. Like the aforementioned AP report, the AG found that Google's settings didn't actually do what they claimed to do with regard to ceasing location data tracking:
The only news network further to the right than Fox News has just seen its baseless libel lawsuit against MSNBC host Rachel Maddow dismissed under California's anti-SLAPP law. While Fox occasionally has to acknowledge the real world and employs a few newscasters critical of the President and his policies, One America News Network (OAN/OANN) apparently feels no obligation to address any issues honestly, preferring to curl up in the lap of the leader of the free world.

OAN sued after Maddow offered her commentary on a Daily Beast article that said the news network employed a "Kremlin-paid journalist." The journalist, Kristian Rouz, had been working for both OAN and the Kremlin-owned Sputnik, the latter of which was determined to be a participant in Russia's 2016 election interference effort.

Maddow's commentary was somewhat hyperbolic and very critical of OAN and its double-agent journalist. But OAN took particular issue with a single phrase Maddow used during her broadcast. From the decision [PDF]:
We've got a double winner this week, with That One Guy taking first place for both insightful and funny with some thoughts on Trump's social media executive order:
The esports momentum created by the COVID-19 pandemic isn't slowing down. And one of the things many people are learning, now that they're either spectating or participating in esports for the first time, is just how hard it is to be really, really good in these competitions. The days that spawned the cliches about unskilled gamers slothing in their parents' basements are long gone, replaced by corporate sponsorships for sold-out events in full-scale arenas. In the absence of traditional IRL sports at the moment, many professional athletes are getting into esports as well, with auto racing having led the way.

And now, in an occurrence that basically had to happen, we have our first instance of a professional racer getting caught having a gamer stand in for him during a race.
We've long noted that roughly twenty states have passed laws either outright banning community broadband or tightly restricting such efforts. The vast majority of the time, these bills are literally written by telecom lobbyists and lawyers for companies like AT&T and Comcast. While the bills are usually presented by lawmakers as an earnest concern about taxpayer boondoggles, the real motivation is usually the prevention of any disruption of their cozy geographical monopolies/duopolies.

In some states, community broadband is being offered via the local power utility. That's the case in Tennessee, where Chattanooga-based EPB has been prohibited from expanding despite the overall lack of competitive options in the state -- and despite EPB having been rated one of the best ISPs in America. When ISPs can't get straight-out bans passed via the state legislature, they'll usually try to bury such restrictions in unrelated bills, as when AT&T tried to include community broadband restrictions in an unrelated Missouri traffic ordinance.

Hugely frustrated by substandard service and a lack of broadband competition, more than 750 communities around the country have built some sort of community broadband network. But even when legislation intended to help them is proposed, it's an uphill battle to keep entrenched telecom lobbyists from making the bills worse. Case in point: Louisiana is considering Senate Bill 407, which would let utilities expand broadband to their rural customers. But provisions buried in the bill at the last second restrict utilities from offering broadband anywhere an incumbent already offers service:
The decision this post discusses, Waite v. Universal Music Group, came out at the end of March, but, as one of the leading cases litigating the termination provision of the copyright statute, it's still worth attention. Maybe even especially now, as the Copyright Office overtly goes to bat for rightsholders. Because the termination provision speaks to who the rightsholders actually are. Without it, the rightsholders would likely not be the artists who actually created the works.

The decision does a good job at least partially explaining why the termination provision is important:
I know we've gone through this a bunch already, but there remains no evidence to support the claims of "anti-conservative bias" at major social media platforms. Some people (usually self-proclaimed conservatives, though they rarely seem to represent actual conservative principles) get really angry about this. But, oddly, none ever seem to present any actual evidence.

Of course, the very underpinning of the White House's silly and nonsensical executive order regarding social media is that of course there is anti-conservative bias in the moderation, and it even points to the action that kicked off this entire temper tantrum from the thin-skinned President: Twitter provided a link under his debunked conspiracy theory tweet about mail-in ballots. Many Trump supporters and the executive order itself argue that this kind of fact checking is only done to conservatives:
Privacy laws can have a lot of moving pieces, from notices and disclosures, opt-in and opt-out consent requirements, to privacy defaults and user controls. Over the past few years, there has been significant progress on these issues because privacy advocates, consumer groups, industry voices, and even lawmakers have been willing to dive into definitional weeds, put options on the table, and find middle ground. But this sort of thoughtful debate has not happened when it comes to how privacy laws should be enforced and what should happen when companies screw up, families are hurt, and individuals' privacy is invaded.

Instead, when it comes to discussing private rights of action and agency enforcement, rigid red lines have been drawn. Consumer groups and privacy advocates say let individuals sue in court -- and call it a day. Business interests, when they talk about "strong enforcement," often mean letting an underfunded Federal Trade Commission and equally taxed state Attorneys General handle everything. Unfortunately, this binary, absolutist dispute over policing privacy rights threatens to sink any progress on privacy legislation.

It happened in Washington state, which failed to enact a comprehensive privacy framework in March because of a single sentence that could have let some consumers sue to enforce their rights under the state's general Consumer Protection Act. Private rights of action have stymied state privacy task forces, and the issue is consuming efforts by the Uniform Law Commission to craft a model privacy bill. This is but a microcosm of what we've seen at the federal level, where lawmakers are at "loggerheads" over private rights of action.

This impasse is ridiculous. Advocacy groups share some blame here, but industry voices have failed to put any creativity into offering an alternative path forward. Company after company and trade association after trade association have come out in favor of privacy rules, but the response to any concern about how to ensure those rules are followed has been crickets. Few seem to have given much thought to what enforcement could look like beyond driving a Brinks truck full of money up to the FTC. That is not good enough. If industry is serious about working toward clear privacy rules, business interests have two obligations: (1) they should offer up some new ideas to boost enforcement and address legitimate concerns about regulatory limitations and capture; and (2) they need to explain why private rights of action should be a non-starter in areas where businesses already are misbehaving.

First, while we can acknowledge the good work that the FTC (and state Attorneys General) has done, we should also concede that agencies cannot address every privacy problem and have competing consumer protection priorities. Commentators laud the FTC's privacy work but have not suggested how an FTC with more resources will not just do more of what it's already doing. There are outstanding considerations animating efforts to create an entirely new federal privacy agency (and that's on top of a proposal in California to set up its own entirely new "Privacy Protection Agency"). Improving the FTC's privacy posture will require more than just more money and personnel.

Part of this will be creating mechanisms that ensure individuals can get redress. One idea would be to require the FTC to help facilitate complaint resolutions. The Consumer Financial Protection Bureau already does this to some extent with respect to financial products and services.
The CFPB welcomes consumer complaints -- and then works with financial companies to get consumers a direct response about problems. These complaints also help the CFPB identify problems and prioritize work, and the CFPB then publishes (privacy-friendly) complaint data. This stands in contrast to the FTC's Consumer Sentinel Network, which is a black box to the public. Indeed, the FTC's complaint system is opaque even to complainants themselves. The black box nature of the FTC is, fairly or not, a constant criticism by privacy advocates. A group of advocates began the Trump administration by calling for more transparency from the Commission about how it handles complaints and responds to public input. I can speak to this issue, having submitted my own complaint to the FTC about the privacy and security practices of VPNs in 2017. Months later, the FTC put out a brief blog post on the issue, which I took to be the end of the matter on their end. Some sort of dual-track informal and formal complaint process like the Federal Communications Commission's could be one way to ensure the FTC better communicates with outsiders raising privacy concerns.

These are mostly tweaks to FTC process, however, and while they address some specific complaints about privacy enforcement, they don't address concerns that regulators have been missing -- or avoiding -- some of the biggest privacy problems we face. This is where the rigid opposition to private rights of action and failure to acknowledge the larger concern is so frustrating.

Sensitive data types present a good example. Unrestrained collection and use of biometrics and geolocation data have become two of the biggest privacy fights of the moment. There has been a shocking lack of transparency or corporate accountability around how companies collect and use this information. Their use could be the key to combating the ongoing pandemic; their misuse a tool for discrimination, embarrassment, and surveillance. If ever there were data practices where more oversight is needed, these would be it.

Yet, the rapid creep of facial recognition gives us a real-world test case for how agency enforcement can be lacking. While companies have been calling for discussions about responsible deployment of facial recognition even as they pitch this technology to every school, hospital, and retailer in the world, Clearview AI just up and ignored existing FTC guidance and state law. Washington state has an existing biometric privacy law, which the state Attorney General admitted has never been the basis of an enforcement action. To my knowledge, the Texas Attorney General also has never brought a case under that state's law. Meanwhile, the Illinois Biometric Information Privacy Act (BIPA) may be the one legal tool that can be used to go after companies like Clearview.

BIPA's private right of action has been a recurring thorn in the sides of major social media companies and theme parks rolling out biometrics technologies, but no one has really cogently argued that companies aren't flagrantly violating the law. Let's not forget that facial recognition settings were an underappreciated part of the FTC's most recent settlement with Facebook, too. However, no one can actually discuss how to tweak or modernize BIPA because industry groups have had a single-minded focus on stripping the law of all its private enforcement components.

Industry has acted in lockstep to insist it is unfair for companies to be subject to limitless liability by the omnipresent plaintiffs bar for every minor or technical violation of the law.
And that's the rub!

There is no rule that says a private right of action must encompass the entirety of a privacy law. One of the compromises that led to the California Consumer Privacy Act was the inclusion of a private right of action for certain unreasonable data breaches. Lawmakers can take heed and go provision-by-provision and specify exactly what sorts of activities could be subject to private litigation, what the costs of the litigation might be, and what remedies can ultimately be obtained.

The U.S. Chamber of Commerce has been at the forefront of insisting that private rights of action are poor tools for addressing privacy issues, because they can "undermine appropriate agency enforcement" and hamper the ability of "expert regulators to shape and balance policy and protections." But what's the objection, then, in areas where that's not true?

The sharing and selling of geolocation information has become especially pernicious, letting companies infer sensitive health conditions and facilitating stalking. Can any industry voice argue that companies have been well-behaved when it comes to how they use location information? The FTC clearly stated in 2012 that precise geolocation data was sensitive information warranting extra protections. Flash forward to 2018 and 2019, where The New York Times is engaged in annual exposés on the wild west of apps and services buying and selling "anonymous" location data. Meanwhile, the Communications Act requires carriers to protect geolocation data, and yet the FCC fined all four major wireless carriers a combined $200 million for sharing their subscribers' geolocation data with bounty hunters and stalkers in February of this year.

Businesses do not need regulatory clarity when it comes to location data -- companies need to be put in a penalty box for an extended timeout. Giving individuals the ability to seek private injunctive relief seems hardly objectionable given this track record. Permitting class actions for intentional violations of individuals' geolocation privacy should be on the table as well.

There should be more to discuss than a universe where trial attorneys sue every company for every privacy violation or a world where lawmakers hand the FTC a blank check. Unfortunately, no one has yet put forward a vision for what the optimum level of privacy enforcement should be. Privacy researchers, advocates, and vulnerable communities have forcefully said the status quo is not sufficient. If industry claims it understands the importance of protecting privacy but just needs more clarity about what the rules are, companies should begin by putting forward some plans for how they will help individuals, families, and communities when they fall short.

Joseph Jerome, CIPP/US, is a privacy and cybersecurity attorney based in Washington, D.C. He currently is the Director of Multistate Policy for Common Sense Media.
In the early hours of December 31, 2019, weeks before the coronavirus was recognized as a budding pandemic, Taiwanese Centers for Disease Control Deputy Director Luo Yijun was awake, browsing the PTT Bulletin Board. A relic of 90s-era hacker culture, PTT is an open source internet forum originally created by Taiwanese university students. On the site's gossip board, hidden behind a warning of adult content, Luo found a discussion about the pneumonia outbreak in Wuhan. However, the screenshots from WeChat posted to PTT described a SARS-like coronavirus, not the flu or pneumonia. The thread identified a wet market as the likely source of the outbreak, indicating that the disease could be passed from one species to another. Alarmed, Luo warned his colleagues and forwarded his findings to China and the World Health Organization (WHO). That evening, Taiwan began screening travelers from Wuhan, acting on the information posted to PTT.

A niche internet forum, not the WHO or Chinese Communist Party (CCP), notified Taiwan, and the world more broadly, of the seriousness of COVID-19 -- the disease caused by the new coronavirus. The same day, Wuhan's Municipal Health Commission described the disease as pneumonia and cautioned against assumptions of human-to-human transmission. While Chinese health authorities downplayed the seriousness of the outbreak, a lightly governed website helped information about the disease escape China's Great Firewall. As viral misinformation inspires skepticism of free speech in the west and conservative legal scholars express admiration for China's system of information control, this episode illustrates the value of unfiltered speech.

PTT's gossip board is not fact-checked by experts, and while the board has some rules, it is a place for gossip rather than verified information or news. The forum is governed far more liberally than contemporary social media platforms with extensive community standards and tens of thousands of paid moderators. While bulletin boards have largely fallen out of favor with western internet users, PTT is probably most comparable to 4chan, the Something Awful forums, or Hacker News. In the past, it has hosted leaked government surveillance proposals, and Chinese officials have recently complained about the site as a source of abusive speech about the WHO.

There is a real difference between lightly governed or unmoderated spaces, essentially ruled by the First Amendment (which inevitably play host to the good, the bad, and the ugly), and platforms that are specifically curated to highlight vulgar or illiberal content. 4chan contains image boards dedicated to fashion, travel, umpteen forms of Japanese animation, and /pol/, a board for politically incorrect conversation that receives an outsized amount of attention in mainstream media. The Daily Stormer is a blog for white nationalists. We must resist the urge to condemn ungoverned fora alongside badly governed forums simply because both provide platforms for noxious speech.

Because the Daily Stormer is specifically curated to highlight neo-Nazi speech, we can safely assume that it won't host valuable information. Its gatekeepers explicitly select fascistic speech for publication before the content goes live and are unlikely to grant a platform to anything else. It certainly isn't a hangout for anonymous epidemiologists. 4chan, on the other hand, contains its fair share of extremist speech, but the platform is not moderated by fascists, nor, for the most part, by anyone at all.
4chan hosts almost any sort of speech; despite being unverified, useful information may still be posted there. Due to its lack of formal gatekeeping, users' comments are not screened for either accuracy or good taste. As a result of 4chan's norm of anonymous participation, its prominence, and its popularity with particularly active internet trolling communities in the mid-aughts, the site gained a reputation as an informational free-for-all, rendering it a useful dumping ground for both leaks of authentic nonpublic information and unhinged conspiracy.

Even as its prominence has diminished, 4chan's reputation ensures that it remains a popular space to share privileged information, often in concert with other essentially unmoderated publication services such as Pastebin. Last year, news of Jeffrey Epstein's death was first leaked on the site. While it can be difficult to prove the veracity of any one claim, the existence of such a place -- an ungoverned information clearinghouse -- has undeniable value. Ungoverned fora allow arguments, assertions, and media to be freely shared and considered without giving undue authority to unproven assertions.

Because users participate anonymously or pseudonymously, they cannot rely upon, and subsequently do not risk, their permanent personal reputations and credentials. Likewise, it is the very popularity of these message boards as information clearinghouses that makes them attractive to bad actors. If you want to publish a sensitive message, for good or for ill, lightly moderated platforms are good tools for the job.

Although these platforms may spread disinformation, if read with a healthy dose of skepticism the content they carry is not per se dangerous. Crucially, they fail differently than, in this case, Chinese state health authorities, which had political reasons to downplay the seriousness of the outbreak. Rather than providing filtered, authoritative information that can cause widespread harm if incorrect, such as the WHO recommendations against mask use published throughout March, open fora host many unfiltered claims that, without supporting evidence, carry little authority whatsoever. A healthy information ecosystem will contain both trustworthy authorities and bottom-up information distribution networks that can correct institutional failures. In a world in which seemingly authoritative sources are not trustworthy, unfiltered platforms will gain credence, for good and ill.

However, as Luo Yijun's late-night discovery on PTT demonstrates, unverified information can inform and illuminate, especially in the absence of trustworthy authoritative information. Furthermore, if used effectively, open-source information hosted on ungoverned platforms can enhance the capability and legitimacy of traditional institutions, such as the Taiwanese CDC. Liberally governed platforms are often blamed for their role in transmitting falsity and hate but seldom lauded when they facilitate the spread of life-saving information.

Will Duffield is a Policy Analyst at the Cato Institute.
The Hands-On Game Development Bundle has 10 courses of instruction on using various platforms and languages to develop your own games. You'll learn C++, Node.js, Godot, and others. You will build a turn-based, micro-strategy game, develop a 2D platformer level using tiles, develop an AR spaceship-shooting game, and more. It's on sale for $35.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We wrote a detailed breakdown of the President's silly, nonsensical, legally wrong Executive Order regarding social media yesterday. A few hours later the official version came out, and it was somewhat different from the draft (though in no way better). If you want to see the differences between the draft and the final version, here's a handy dandy redline version put together by Professor Eric Goldman.

The new version inserted a bunch more nutty ramblings that have no legal meaning but, should the executive order ever be challenged in court, more or less make it clear that this was done vindictively. It honestly reads like Trump read the draft and whined that there wasn't enough about how unfair everyone is to him and what a meanie Rep. Adam Schiff has been in investigating the President. Separately, the very fact that the draft changed so drastically from the night before to the moment of release shows that it was drafted hastily, which provides even more evidence that it was done directly in retaliation for Twitter fact-checking his false claims.

The biggest change in the final version is that, beyond setting up a "working group," the final version instructs the Attorney General to "develop model legislation for consideration in States where existing statutes do not protect Americans from such unfair and deceptive practices." Theoretically, this might become a nuisance, but (1) Barr already put together such a working group last year, and (2) he had already been working on various legislative proposals to undermine Section 230, including the EARN IT Act that we've discussed at great length.

One other notable change is in the instructions given to the FCC, which (despite having literally no legal authority over websites) is to come up with an interpretation of Section 230 (also, the FCC has no reason or basis to interpret Section 230, as that's a job for the courts). The difference from the draft is that it instructs this analysis to look at "the interaction" between the two parts of the Good Samaritan clause:
When people mention the digital divide, often they're referring to the divide between people who have access to the internet and those who do not. However, we can also visualize it as the divide between those who benefit from free expression on social media and other digital platforms -- and those who don't. In order to get ahead of this burgeoning digital divide, policymakers will need to preserve the values of privacy and consumer choice in a way that one does not undermine the other.

This past February, the New York Times profiled Jalaiah Harmon, the creator of the viral TikTok dance, "The Renegade." But Harmon didn't create the dance on TikTok; she used a smaller app, Funimate, and cross-posted her video to Instagram. Instagram is where other popular TikTok creators first learned of the dance. TikTok, like many platforms, doesn't encourage posters to give credit to creators.

Instead of Jalaiah benefiting from the virality of her own dance, other TikTok users did. Those benefits include brand deals, media opportunities, and the chance to connect with the professional dance world. If Jalaiah had been able to easily cross-post from Funimate to TikTok, she might have been able to benefit from "The Renegade" right from the start.

Apps like Funimate, Dubsmash, and Likee offer smaller, vibrant communities, often popular with users of color and other marginalized communities. These smaller platforms may provide functionality that other apps don't, or they may just foster community in a way that appeals more to users who are not considered mainstream and who want to preserve their unique culture. Apps like TikTok may not provide that opportunity, and that is okay when consumers have choices in the marketplace.

However, because these communities are smaller, users have fewer opportunities to monetize their creativity. These smaller applications also have a harder time benefiting from the creativity of their users. In the case of Jalaiah, instead of new users flocking to Funimate to check out Jalaiah's other videos, TikTok benefited from the dance and probably grew its user base because of it.

In Washington, interoperability (the technical capability of different platforms to communicate with each other and work together) has become one of several pro-competition, pro-consumer choice policy solutions to gain notice. In the TikTok/Funimate case, interoperability would allow users to create videos on Funimate but have them viewable on TikTok. This functionality would also make it easier for TikTok users to leave TikTok if they thought another video sharing app would provide them with better content, better usability, or just a better community.

One of the most common excuses tech companies give to avoid engaging in interoperability, or even basic data sharing at the user's request, is that doing so may violate users' privacy. This excuse is meant to force policymakers to give platforms a reprieve from more stringent privacy protections or, if Congress must pass comprehensive privacy rules, to lock in a competitive advantage for existing platforms and online companies. That is a false choice.

Most platforms get a lot of data from their users.
Whether it's for personalizing the user experience, targeting ads, or both, internet companies collect so much personal information that they know a lot about what the user wants, who the user is, what the user does, whom the user connects with, what the user likes, and where the user moves.

As a result, it is often hard to stop using a platform or leave for its competition. We call this concept the cost of exclusion. If leaving a platform means leaving memories, artistic works, or friends behind, or even abandoning a digital self that represents us in ways that we can't offline, then very few people are going to do it. The social cost is too high.

Without a growing user base, newer platforms often can't compete with older, dominant players. This is especially problematic for platforms that cater to marginalized groups like people of color, queer people, or people with disabilities.

Interoperability can help new platforms build up a store of data they can use to improve their services, because when they gain a new user, that user can also bring access to their data and portions of their social graph from the old service. This can increase the power of users "voting with their feet" by leaving one service to switch to another. If users' data becomes shared across services, then the new service they've chosen can doubly benefit: it gets a new user and a new source of data.

But while sharing data can be useful to both users and platforms alike, how do we preserve users' privacy? And how can we prevent the data from being exploited?

First, we need a comprehensive privacy law. A comprehensive law would set a baseline expectation for preserving user privacy, regardless of the size of an online service or platform. Baseline expectations across platforms give all users, regardless of what platform they choose, protection against data discrimination or other privacy violations.

Second, we need interoperability rules that govern internet platforms to be a part of the privacy conversation. These rules wouldn't just govern how platforms are made interoperable, but would also give users additional privacy protections. As a baseline, interoperability rules could limit how platforms use the data they get from interoperable systems. The rules could also prevent platforms from using that data for advertising or any other purpose not explicitly requested by the user. (A rough sketch of what such a purpose-limited transfer could look like appears at the end of this piece.)

With combined privacy and interoperability protections, an individual user will remain protected as their data moves from one platform to the next, with the freedom to share and benefit from their creativity without accepting weaker privacy or giving in to the cost of exclusion from a dominant platform. If a user does decide to use an interoperable system, then that user's friends' or followers' data could be available to the new platform only if those friends consent to interoperable sharing.

The internet is a powerful tool for free expression and, as such, we must preserve spaces where marginalized groups congregate, create, and interact as a community. Niche communities may not represent your individual viewpoint, and some may be outright hateful, but if we are to preserve consumer choices for free expression for some communities, we cannot deny it for others.

If larger platforms are essentially stealing the content, work, and ideas of users on smaller platforms, then that harms not only the individual who created the content, but the original platform that housed the content.
Privacy-preserving interoperability could be the solution to preserving spaces for marginalized communities, while still allowing them to benefit from their work.
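To make the idea of a purpose-limited, consent-scoped transfer concrete, here is a minimal sketch in TypeScript. This is purely illustrative: no platform mentioned above exposes such an API, and every name here (CrossPostRequest, Purpose, authorizeUse) is hypothetical. It models only the rule described above: data obtained through interoperability may be used solely for purposes the user explicitly granted.

```typescript
// Hypothetical sketch only -- not a real platform API.
// The possible uses a receiving platform might make of ported data.
type Purpose = "display" | "recommendation" | "advertising";

// A cross-post request carries the creative work, its attribution,
// and an explicit list of uses the user has consented to.
interface CrossPostRequest {
  authorId: string;           // the user on the originating platform
  sourcePlatform: string;     // e.g. the smaller app the video came from
  attribution: string;        // credit that must travel with the work
  videoUrl: string;           // pointer to the ported creative work
  grantedPurposes: Purpose[]; // uses the user explicitly consented to
}

// The receiving platform must check every use against the user's grant.
function authorizeUse(req: CrossPostRequest, use: Purpose): boolean {
  return req.grantedPurposes.includes(use);
}

// Example: the user allows display and recommendation, but not ad targeting.
const request: CrossPostRequest = {
  authorId: "creator-123",
  sourcePlatform: "smaller-video-app",
  attribution: "Original dance by creator-123",
  videoUrl: "https://example.com/video/renegade",
  grantedPurposes: ["display", "recommendation"],
};

console.log(authorizeUse(request, "display"));     // true
console.log(authorizeUse(request, "advertising")); // false -- never granted
```

The design choice the sketch tries to surface is that consent travels with the data rather than living in a platform's terms of service: the "advertising" check fails not because a regulator intervened after the fact, but because the grant simply never included it.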
Early last year, a federal court dumped a lawsuit filed by alt-right figureheads Laura Loomer and Freedom Watch (Larry Klayman's organization) alleging multiple online platforms were engaging in a government-enabled conspiracy to silence them. Mixing and matching liberally from precedent that didn't say what the plaintiffs thought it said, the lawsuit tried to skirt around things like Section 230 immunity by pretending this was about being unconstitutionally blocked from entering public spaces.

The lawsuit has now been rejected twice. The DC Circuit Appeals Court decision [PDF] sums up the action at the lower level, noting that it's affirming the call made by the district court.
New Zealand has been in the censorship business for years, but the government appears to believe it's still not doing enough censoring. Legislation stemming from the government's reaction to the live-streamed Christchurch shooting seeks to expand its ability to block content it deems objectionable. In most cases, this means content related to terrorism or violent extremism. But the livestreaming of a mass shooting has created an open-ended definition for the government to work with, on top of its criminalization of that act.

Newsroom has written up a very thorough examination of the proposed law, with this chilling bit found all the way at the end of its article.
Remember Denuvo? Back in the far simpler times of 2016-2018, which somehow seem light years better than 2020 despite being veritable dumpster fires in and of themselves, we wrote a series of posts about Denuvo's DRM and how it went from nigh-uncrackable to totally crackable shortly after games were released with it. Did we take a bit too much pleasure in this precipitous fall? Sure, though our general anti-DRM stance sort of mandated dunking on a company that once touted itself as invincible. Either way, it started to get comical watching publishers release a game with Denuvo, have the game cracked in a matter of days, if not hours, and then release a patch to remove Denuvo entirely from the game.

Due in part to this, Denuvo eventually announced it would be shifting its focus away from producing DRM that didn't work to making anti-cheat software. So, how is that going? Well, let's take a look at Doom Eternal, a game which only a week ago added Denuvo's anti-cheat software via an update.
SmileDirectClub -- maker of in-home dental appliances -- is back in the lawsuit business. A couple of years ago, the company sued Lifehacker over an article originally titled "You Could Fuck Up Your Mouth With SmileDirectClub." The company claimed any criticism of its products and techniques was defamatory. Despite the original inflammatory headline, the Lifehacker piece was even-handed, warning potential customers that semi-DIY dental work has some downsides. SmileDirect voluntarily dismissed the lawsuit a week later, perhaps sensing a judge -- even one in bogus-lawsuit-friendly Tennessee -- might not agree that critical opinions, however harsh, were libelous.

Apparently hoping to undermine the "defamation" market, SmileDirectClub began tying refunds to gag orders, refusing to give unhappy customers back their money unless they signed a non-disparagement agreement. Now, SmileDirect is headed back to court to take on NBC for its critical news report. This time, SmileDirect has to talk its way past a revamped state anti-SLAPP law to get the $2.85 billion it's seeking in this lawsuit. (h/t Daniel Horwitz)

The lawsuit [PDF] appears to have been filed by lawyers being paid by the word. It's over 200 pages long and comes with a comprehensive table of contents. Longer does not mean better-developed. And it also doesn't mean the legal arguments are stronger than those found in more sensibly-sized filings.

SmileDirect says NBC's report did an incredible amount of damage to its business.
Moderation is a platform operator saying "we don't do that here". Discretion is you saying "I won't do that there". Censorship is someone saying "you can't do that anywhere" before or after threats of either violence or government intervention.

Regular Techdirt commenters have seen that paragraph show up often in recent months. But what does it really mean? Well, as the person who crafted that bit (and who uses it on a regular basis), I'mma do you an explain.

Moderation

Moderation is a platform operator saying "we don't do that here". When I use that phrase, I may cite a column from a blog called Thagomizer. (That column helped me start crafting my bit in the first place.) In the column, writer Aja Hammerly refers to it as a "magic" phrase:
After over a decade of largely uncritical admiration from journalists, policymakers, and the public, the United States' biggest tech companies have experienced a swift fall from grace.

Facebook, Google, and Amazon are the subject of long overdue scrutiny, investigations, and legal proceedings in jurisdictions around the world for their widespread and repeated violations of people's privacy, while their executives no longer enjoy the glowing reputations they once did. After Cambridge Analytica, YouTube's record-breaking COPPA fine for illegally tracking children, and a nearly endless list of other privacy transgressions, the Silicon Valley companies deserve all the scrutiny they're getting and then some.

But Silicon Valley tech companies aren't the only ones violating our privacy with impunity, and focusing on them as the sole villains allows a whole host of co-conspirators to get off scot-free. These other companies aren't doing less objectionable things with your data, and they haven't demonstrated that they're more worthy of consumer trust, or less likely to be breaking the law.

Policymakers and tech journalists need to take off their "big tech" blinders and focus more energy on the lesser-known privacy violators benefiting from Facebook and Google's absorption of the critical oxygen. When we're talking about powerful companies surreptitiously creating information about you and using it to make important decisions about your life, threaten your safety, or violate your privacy, Facebook and Google shouldn't be the only companies we're talking about -- because they're far from being the only source of the problem.

Take the telecom industry, for example. All of the biggest telecommunications companies have been caught violating their customers' privacy, often at the same scale and to the same degree of flagrancy as their Silicon Valley peers.

In 2016, Verizon was fined $1.35 million by the FCC for tracking the browsing history of users on its mobile network without their knowledge and consent. Two years later, a Verizon-owned ad tech company paid the then-highest COPPA fine to the New York state Attorney General for illegally tracking children. AT&T was fined $25 million by the FCC for failing to protect consumer data after AT&T employees stole the names and full or partial Social Security numbers of around 280,000 customers, then sold them to third parties.

More recently, an investigation by the Norwegian Consumer Protection Council found that an AT&T-owned ad tech company was among those receiving granular location information and information on users' sexual orientation from dating apps like Grindr and Tinder. All the biggest carriers -- Verizon, AT&T, T-Mobile and Sprint -- were found to be illegally selling customers' real-time location data to anyone who wanted to buy it.

Not only do the telecoms violate the privacy protections we have, they tirelessly lobby to make them weaker and worse. They fought the FCC's broadband privacy rules in 2016, then (successfully) convinced Congress to negate them in 2017, lied about the privacy implications of encrypting DNS queries, and are trying to pass a Trojan horse privacy law that would calcify an exploitative status quo. Criticisms of "big tech's" exploitation of people's privacy that ignore big telecom miss the forest for the ISPs.

Then there's the ad tech industry.
Behavioral advertising, which targets people with ads based on information about their browsing history or offline behavior rather than their current online activity, is in many ways the internet's original privacy sin. The profit motive it supplies for companies to track our every move and keep us scrolling and clicking for as long as possible is responsible for much of the toxicity, disinformation, and privacy violations that we've become inured to.

Behavioral advertisers have done everything they can to link the viability of innovative web services to highly profitable surveillance of users, while claiming that any attempts to weaken or sever that link will break the internet. Their business model is what makes so many online services inherently and unavoidably privacy-invasive when they don't have to be.

Given their reversal of fortunes, one could say that Facebook, Google, and the other Silicon Valley giants have taken over the whipping boy role in privacy policy discussions that data brokers used to occupy. Data brokers were a heavy focus of consumer protection-minded policymakers at the FTC and the White House for a number of years, and while the attention previously paid to them has shifted, the venality of their business model hasn't. The core business of these companies is to collect and infer sensitive information about as many people as possible, and to share and sell those assessments to any company that wants to buy them: advertisers, health insurance companies, educational institutions, hedge funds, and others.

These companies traffic in lists of sexual assault survivors, of which students are undocumented or are using birth control, and of which of us is most likely to be a vulnerable target for predatory loans, all while remaining staunchly contemptuous of the prerogative of legislators to rein them in. Nothing about data brokers' exploitation of our data is less objectionable or malignant than what Facebook and Google are doing with it, but #DeleteLiveramp won't get anyone's attention.

The list of companies that rake in enormous profits for violating your privacy and deserve more notoriety for it isn't short. There are the app developers, the location aggregators, the credit reporting agencies, and the insurance companies. There are also the brick-and-mortar companies that are falling all over themselves to build exactly the same kinds of tracking and analytic capabilities that Facebook and Google have weaponized, or that contract out for them when they can't. All of these companies actively, eagerly take part in the kinds of privacy violations that the Silicon Valley companies have grown notorious for, and a focus on the Silicon Valley companies alone minimizes the threat and distorts the real problem.

There are some ways in which Facebook, Google, and Amazon present uniquely severe concerns by virtue of their size and ubiquity: it's not as though their widespread notoriety wasn't repeatedly earned. Moreover, the examples cited here of transgressions by non-Silicon-Valley companies only exist because tech journalists or policymakers decided to focus on a not-Facebook and not-Google issue.

Nor does expanding the focus of tech critics to lesser-known privacy violators mean that regulators should exclusively focus on small companies at the expense of big ones, something the FTC has correctly been criticized for. Neither policymakers nor journalists should ignore the 400-pound gorillas in the room or the havoc they wreak.
But there are other actors in the data collection ecosystem that deserve to be just as notorious as the Silicon Valley giants, and whose conduct deserves just as much attention.

For all the investigation and criticism that Amazon's Rekognition deserves, policymakers and journalists shouldn't ignore lesser-known facial recognition technology vendors like NEC or Idemia. As Clearview AI has demonstrated in dramatic fashion, a hitherto-unknown company can turn out to be engaged in practices as bleakly dystopian as you can imagine, as soon as it receives the kind of scrutiny that the biggest tech companies have been experiencing for years.

Don't let Mark or Sundar shrink into the shadows -- just make Hans (Vestberg, Verizon), Brian (Cassin, Experian), and Scott (Howe, LiveRamp) household names alongside them. Conversations about privacy policy, investigations into violators, and blame for the exploitative wretchedness of the current ecosystem shouldn't solely focus on Mountain View and Menlo Park at the expense of ignoring Dallas, Atlanta, and Bentonville.

At the end of the day, "big tech" means nothing if it excuses the identical privacy transgressions of big telecom, big ad tech, big data broker, and big, well, everything else.
As we continue to deal with the fallout of our thin-skinned President throwing a hissy fit over Twitter daring to provide more context to conspiracy theory nonsense that Trump himself tweeted, Facebook founder and CEO Mark Zuckerberg has apparently decided that it's more important to stomp on Twitter while it's down than to protect the wider internet. In a shameful display of opportunistic nonsense, Zuckerberg went on Fox News and pretended that Facebook was somehow not interested in moderating content the way Twitter did:
The Accredited Agile Project Management Bundle by SPOCE is designed to equip users with the know-how they need to master Agile project management, PRINCE2 Project Management, and PRINCE2 Agile Project Management. You'll learn the skills needed for managing and delivering successful projects. You'll also gain an understanding of risk management, planning, handling change, and more. It's on sale for $99.Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We've officially reached pure silly season when it comes to internet regulations. For the past two years now, every so often, reports have come out that the White House was exploring issuing an executive order trying to attack Section 230 and punish companies for the administration's belief in the myth that content moderation practices at large social media firms are "biased" against conservatives.

However, it apparently took Twitter doing literally nothing more than linking to people arguing that Trump's tweets were misleading to cause our President to throw a total shit fit and finally break out the executive order. This one is somewhat different than drafts that have been floated in the past, though it has the same origins (and, according to a few people I spoke to, this new executive order was "hastily drafted" to appease an angry President who can't stand the idea that someone might correct his nonsense). You can read the draft that got sent around to everyone last night. The final version is expected to be at least somewhat close to this.

To be clear: the executive order is nonsense. You can't overrule the law by executive order, nor can you ignore the Constitution. This executive order attempts to do both. It's also blatantly anti-free speech, anti-private property, and pro-big government -- which is only mildly amusing, given that Trump and his sycophantic followers like to insist they're the opposite of all of those things. But also, because the executive order only has limited power, there's a lot of huffing and puffing in there for very little the administration can actually do. It's very much written in a way to make Trump's fans think he's done something to attack social media companies, but the deeper you dig, the more nothingness you find.

Let's dig into this clusterfuck of nonsense. It starts out with what might sound like a sensible argument, if you don't understand the ins and outs of Section 230, by saying that because Section 230's "good samaritan" clause requires good faith, "pretextual actions restricting online content or actions inconsistent with an online platform's terms of service" are somehow not covered by 230: