Time and time again we've highlighted how, in the modern era, you don't really own the hardware you buy. Music, ebooks, and videos can disappear on a dime without recourse, your game console can lose important features after a purchase, and a wide variety of "smart" tech can quickly become dumb as a rock in the face of company struggles, hacks, or acquisitions, leaving you with pricey paperweights where innovation once stood.

The latest case in point: Google acquired Waterloo, Ontario-based North back in June. For several years, North had been selling AR-capable "smart" glasses dubbed Focal. Generally well reviewed, Focal glasses started at $600, went up dramatically from there, and required you to visit one of two North stores -- either in Brooklyn or Toronto -- to carefully measure your head using 11 3D modeling cameras. The glasses themselves integrated traditional prescription glasses with smart technology, letting you enjoy a heads-up display and AR notifications directly from your phone.

But with the Google acquisition, North posted a statement to its website, stating the company was forced to make the "difficult decision" to wind down support for Focal as of the end of July, at which point the "smart" tech will become rather dumb:
On February 8, 1996, President Clinton signed into law the Telecommunications Act of 1996. Title V of that act was called the Communications Decency Act, and Section 509 of the CDA was a set of provisions originally introduced by Congressmen Chris Cox and Ron Wyden as the Internet Freedom & Family Empowerment Act. Those provisions were then codified at Section 230 of title 47 of the United States Code. They are now commonly referred to as simply “Section 230.”

Section 230 prohibits a “provider or user” of an “interactive computer service” from being “treated as the publisher or speaker” of content “provided by another information content provider.” 47 U.S.C. § 230(c)(1). The courts construed Section 230 as providing broad federal statutory immunity to the providers of online services and platforms from any legal liability for unlawful or tortious content posted on their systems by their users.

When it enacted Section 230, Congress specified a few important exceptions to the scope of this statutory immunity. It did not apply to liability for federal crimes or for infringing intellectual property rights. And in 2018, President Trump signed into law an additional exception, making Section 230’s liability protections inapplicable to user content related to sex trafficking or the promotion of prostitution.

Nevertheless, critics have voiced concerns that Section 230 prevents the government from providing effective legal remedies for what those critics claim are abuses by users of online platforms. Earlier this year, legislation to modify Section 230 was introduced in Congress, and President Trump has, at times, suggested the repeal of Section 230 in its entirety.

As critics, politicians, and legal commentators continue to debate the future of Section 230 and its possible repeal, there has arisen a renewed interest in what the potential legal liability of online intermediaries was, under the common law, for the content posted by their users before Section 230 was enacted.
Thirty years ago, as a relatively young lawyer representing CompuServe, I embarked on a journey to explore that largely uncharted terrain.

In the pre-Section 230 world, every operator of an online service had two fundamental questions for their lawyers: (1) what is my liability for stuff my users post on my system that I don’t know about?; and (2) what is my liability for the stuff I know about and decide not to remove (and how much time do I have to make that decision)?

The answer to the first question was not difficult to map. In 1990, CompuServe was sued by Cubby, Inc. for an allegedly defamatory article posted on a CompuServe forum by one of its contributors. The article was online only for a day, and CompuServe became aware of its contents only after it had been removed, when it was served with Cubby’s libel lawsuit. Since there was no dispute that CompuServe was unaware of the contents of the article when it was available online in its forum, we argued to the federal district court in New York that CompuServe was no different from any ordinary library, bookstore, or newsstand, which, under both the law of libel and the First Amendment, are not subject to civil or criminal liability for the materials they disseminate to the public if they have no knowledge of the material’s content at the time they disseminate it. The court agreed and entered summary judgment for CompuServe, finding that CompuServe had not “published” the alleged libel, which a plaintiff must prove in order to impose liability on a defendant under the common law of libel.

Four years later, a state trial court in New York reached a different conclusion in a libel lawsuit brought by Stratton Oakmont against one of CompuServe’s competitors, Prodigy Services Co., based on an allegedly defamatory statement made in one of Prodigy’s online bulletin boards.
In that case, the plaintiff argued that Prodigy was different because, unlike CompuServe, Prodigy had marketed itself as using software and real-time monitors to remove material from its service that it felt was inappropriate for a “family-friendly” online service. The trial court agreed and entered a preliminary ruling that, even though there was no evidence that Prodigy was ever actually aware of the alleged libel when it was available on its service, Prodigy should nevertheless be deemed the “publisher” of the statement, because, in the court’s view, “Prodigy has uniquely arrogated to itself the role of determining what is proper for its members to post and read on its bulletin boards.”

The Stratton Oakmont v. Prodigy ruling was as dubious as it was controversial and confusing in the months after it was issued. CompuServe’s general counsel, Kent Stuckey, asked me to address it in the chapter I was writing on defamation for his new legal treatise, Internet and Online Law. Tasked with this scholarly mission in the midst of one of the digital revolution’s most heated legal controversies, I undertook to collect, organize and analyze every reported defamation case and law review commentary in this country that I could find that might bear on the two questions every online service faced: when are we liable for user content we don’t know about, and when are we liable for the user content we know about but decide not to remove?

With respect to the first question, the answer dictated by the case law for other types of defendants who disseminate defamatory statements by others was fairly clear. As I wrote in my chapter, “[t]wo common principles can be derived from these cases. First, a person is subject to liability as a ‘publisher’ only if he communicates a defamatory statement to another.
Second, a person communicates that statement to another if, but only if, he is aware of its content at the time he disseminates it.” Hamilton, “Defamation,” printed as Chapter 2 in Stuckey, Internet & Online Law (Law Journal-Seminars Press 1996), at 2-31 (footnotes omitted).

I concluded that the trial court had erred in Stratton Oakmont because it failed to address what the term “publish” means in the common law of libel—to “communicate” a statement to a third party. When an intermediary disseminates material with no knowledge of its content, it does not “communicate” the material it distributes, and therefore does not “publish” it, at least as that term is used in the law of libel. Thus, whether the intermediary asserts the right of “editorial control” over the content provided by others, and the degree of such control the intermediary claims to exercise, are immaterial to the precise legal question at issue: did the defendant “communicate” the statement to another? I wrote:
Another day, another bunch of nonsense about Section 230 of the Communications Decency Act. The Senate Commerce Committee held an FTC oversight hearing yesterday, with all five commissioners attending via video conference (kudos to Commissioner Rebecca Slaughter who attended with her baby strapped to her -- setting a great example for so many working parents who are struggling with working from home while also having to manage childcare duties!). Section 230 came up a few times, though I'm perplexed as to why.

Senator Thune, who sponsored the problematic PACT Act that would remove Section 230 immunity for civil actions brought by the federal government, asked FTC Chair Joe Simons a leading question that was basically "wouldn't the PACT Act be great?" and Simons responded oddly about how 230 was somehow blocking their enforcement actions (which is just not true).
The Complete 2020 Learn Linux Bundle has 12 courses to help you learn Linux OS concepts and processes. You'll start with an introduction to Linux and progress to more advanced topics like shell scripting, data encryption, supporting virtual machines, and more. Other courses cover Red Hat Enterprise Linux 8 (RHEL 8), virtualizing Linux OS using Docker, AWS, and Azure, how to build and manage an enterprise Linux infrastructure, and much more. It's on sale for $69.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
About five years ago, frustration at John Deere's draconian tractor DRM culminated in a grassroots "right to repair" movement. The company's crackdown on "unauthorized repairs" turned countless ordinary citizens into technology policy activists, after DRM and the company's EULA prohibited the lion's share of repair or modification of tractors customers thought they owned. These restrictions only worked to drive up costs for owners, who faced either paying significantly more money for "authorized" repair, or toying around with pirated firmware just to ensure the products they owned actually worked.

Since then, the right to repair movement has expanded dramatically, with a heavy focus on companies like Apple, Microsoft, and Sony, whose attempts to monopolize repair drive up consumer costs and result in greater waste.

It has also extended into the medical arena, where device manufacturers enjoy a monopoly on tools, documentation, and replacement parts, making it a nightmare to get many pieces of medical equipment repaired. That has, unsurprisingly, become even more of a problem during the COVID-19 pandemic due to mass hospitalizations and resource constraints, with medical professionals being forced to use grey market or DIY parts just to get ventilators to work.

Hoping to give the movement a shot of adrenaline, Senator Ron Wyden and Representative Yvette D. Clarke have introduced the Critical Medical Infrastructure Right-to-Repair Act of 2020 (pdf), which would exempt medical equipment owners and "servicers" from liability for copying service materials or breaking DRM, provided it's done to improve COVID-19 aid. The legislation also pre-empts any agreements between hospitals and equipment manufacturers preventing hospital employees from working on their own equipment, something that's also become more of a problem during the pandemic.

From a Wyden statement:
A lawsuit against PACER for its long list of wrongs may finally pay off for the many, many people who've subjected themselves to its many indignities. The interface looks and runs like a personal Geocities page, and those who manage to navigate it successfully are on the hook for pretty much every page it generates, including $0.10/page for search results that may not actually give users what they're looking for.

Everything else is $0.10/page too, including filings, orders, and the dockets themselves. Fees are capped at $3.00 per document if it runs past 30 pages, but for the most part, using PACER is like using a library's copier. Infinite copies can be "run off" at PACER at almost no expense, but the system charges users as though they're burning up toner and paper.

Back in 2016, the National Veterans Legal Services Program, along with the National Consumer Law Center and the Alliance for Justice, sued the court system over PACER's fees. The plaintiffs argued PACER's collection and use of fees broke the law governing PACER, which said only "reasonable" fees could be collected to offset the cost of upkeep. Instead, the US court system was using PACER as a piggy bank, spending money on flat screen TVs for jurors and other courtroom upkeep items, rather than dumping the money back into making PACER better, more accessible, and cheaper.

A year later, a federal judge said the case could move forward as a class action representing everyone who believed they'd been overcharged for access. A year after that, the court handed down a decision ruling that PACER was illegally using at least some of the collected fees. The case then took a trip to the Federal Circuit Court of Appeals, with both adversarial parties challenging parts of the district court's ruling.

The Appeals Court has come down on the side of PACER users. Here's Josh Gerstein's summary of the decision for Politico:
We had just been talking about the upcoming Marvel's Avengers multi-platform game and its very strange plan to make Spider-Man a PlayStation-exclusive character. In that post, I mentioned that I don't think these sorts of exclusive deals, be they for games or characters, make any real sense. Others quoted in the post have actually argued that exclusive characters specifically hurt everyone, including owners of the exclusive platform, since this can only serve to limit the subject of exclusion within the game. But when it came to why this specific deal had been struck, we were left with mere speculation. Was it to build on some kind of PlayStation loyalty? Was it to try to drive more PlayStation purchases? Was it some kind of Sony licensing thing?

Well, we have now gotten from the head of the publishing studio an...I don't know... answer? That seems to be what was attempted, at least, but I'll let you all see for yourselves, if you can make out what the actual fuck is going on here. The co-leader of Crystal Dynamics gave an interview to ComicBook and touched on the subject.
If you only read one qualified immunity decision this year, make it this one. (At least until something better comes along. But this one will be hard to top.) [h/t MagentaRocks]

The decision [PDF] -- written by Judge Carlton W. Reeves for the Southern District of Mississippi -- deals with the abuse of a Black man by a white cop. Fortunately, the man lived to sue. Unfortunately, Supreme Court precedent means the officer will not be punished. But the opening of the opinion is unforgettable. It's a long recounting of the injustices perpetrated on Black people by white law enforcement officers.
A federal judge has happily dismissed one of Devin Nunes' many SLAPP suits. This isn't much of a surprise given what the judge had said back in May regarding Nunes' Iowa-based SLAPP suit (reminder: Iowa has no anti-SLAPP law) against Esquire Magazine and reporter Ryan Lizza. The lawsuit was over this article that Devin Nunes really, really doesn't want you to read: Devin Nunes’s Family Farm Is Hiding a Politically Explosive Secret. Reading that will make Rep. Devin Nunes very, very sad.

Back in May, the judge made it clear that he didn't think there was much of a case here, but gave Nunes a chance to try to save the lawsuit. As you can already tell, his lawyer, Stephen Biss, has come up empty in his attempt. The court easily dismisses the case with prejudice. First, the judge goes through the various statements that Nunes/Biss claim are defamatory and says "lol, no, none of those are defamatory."
The Python 3 Complete Masterclass Bundle has 7 courses to help you hone your Python skills. You'll learn how to automate data analysis, visualize data with Bokeh, test basic scripts, automate networks, and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Forget banning TikTok; the Trump State Department just suggested it wants to basically ban China from the internet. Rather than promoting an open internet and the concept of openness, it appears that under this administration we're slamming the gates shut and setting up the Great American Firewall. Under the guise of what it calls the Clean Network to Safeguard America, last night Secretary of State Mike Pompeo announced a program full of vague statements that could, in practice, fragment the internet.

This is incredibly disappointing on multiple levels. While other countries -- especially China, but also Iran and Russia -- have created their own fragmented internet, the US used to stand for an open internet across the globe. Indeed, for whatever complaints we had about the State Department during the Obama administration (and we had many complaints), its commitment to an open internet was very strong and meaningful. That's clearly now gone. The "Clean Network to Safeguard America" consists of five programs that can be summed up as "fuck you China."
We've been noting for a few weeks that much of the hysteria surrounding TikTok is kind of dumb. For one, banning TikTok doesn't really do much to thwart Chinese spying, given our privacy and security incompetence leaves us vulnerable on countless fronts. Most of the folks doing the heaviest pearl clutching over TikTok have opposed efforts at any meaningful internet privacy rules, have opposed funding election security reform, and have been utterly absent or apathetic in the quest for better security and privacy practices overall (the SS7 flaw, cellular location data scandals, etc.).

Even the idea that banning TikTok meaningfully thwarts Chinese spying, given the country's total lack of scruples, bottomless hacking budget, and our own security and privacy incompetence (the IOT comes quickly to mind), is fairly laughable. Banning TikTok to thwart Chinese spying is kind of like spitting at a thunderstorm in the hopes of preventing rain. Genuine privacy and security reform starts by actually engaging in serious privacy and security reform, not (waves in the general direction of Trump's bizarre, extortionist TikTok agenda) whatever the hell this is supposed to be.

I see the entire TikTok saga as little more than bumbling, performative nonsense by wholly unserious people more interested in money, politics, leverage, and power than privacy or national security. Case in point: desperate to create the idea that TikTok is a serious threat, a new document leak reveals that the Department of Homeland Security has spent a good chunk of this year circulating the claim that a nineteen-year-old girl was somehow "training terrorists" via a comedy video she posted to TikTok.

According to Mainer, the video in question was sent to police departments across Maine by the Maine Information and Analysis Center (MIAC), part of the DHS network of so-called "Fusion Centers" tasked with sharing and distributing information about "potential terrorist threats."
The problem: when you dig through the teen in question's TikTok posts, it's abundantly clear after about four minutes of watching that she's not a threat. The tweet itself appears to have been deleted, but it too (duh) wasn't anything remotely resembling a genuine terrorist threat or security risk:
The French anti-piracy framework known as Hadopi began as tragedy and soon turned into farce. It was tragic that so much energy was wasted on putting together a system designed to throw ordinary users off the Internet -- the infamous "three strikes and you're out" approach -- rather than encouraging better legal offerings. Four years after the Hadopi system was created in 2009, it descended into farce when the French government struck down the signature three strikes punishment because it had failed to bring the promised benefits to the copyright world. Indeed, Hadopi had failed to do much of anything: its first and only suspension was itself suspended, and a detailed study of the three strikes approach showed it was a failure from just about every viewpoint. Nonetheless, Hadopi has staggered on, sending out its largely ignored warnings to people for allegedly downloading unauthorized copies of material, and imposing a few fines on those unlucky enough to get caught repeatedly.

As TorrentFreak reports, Hadopi has published its annual report, which contains some fascinating details of what exactly it has achieved during the ten years of its existence. In 2019, the copyright industry referred 9 million cases to Hadopi for further investigation, down from 14 million the year before. However, referral does not mean a warning was necessarily sent. In fact, since 2010, Hadopi has sent out only 12.7 million warnings in total, which means that most people accused of piracy never even see a warning.

Those figures are a little abstract; what's important is how effective Hadopi has been, and whether the entire project has been worth all the time and money it has consumed. Figures put together by Next INpact, quoted by TorrentFreak, indicate that during the decade of its existence, Hadopi has imposed the grand sum of €87,000 in fines, but cost French taxpayers nearly a thousand times more -- €82 million.
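That cost-to-fines comparison is simple enough to sanity-check. A quick back-of-the-envelope sketch, using the rounded figures reported by Next INpact and TorrentFreak (not official Hadopi accounting):

```python
# Rough check of Hadopi's decade-long cost vs. fines collected,
# using the rounded figures quoted in the annual-report coverage.
fines_collected_eur = 87_000        # total fines imposed since 2010
cost_to_taxpayers_eur = 82_000_000  # estimated ten-year cost to French taxpayers

ratio = cost_to_taxpayers_eur / fines_collected_eur
print(f"Hadopi cost roughly {ratio:,.0f} times what it collected in fines")
```

The ratio lands just shy of a thousand, which squares with the "nearly a thousand times more" characterization.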
Against that background of staggering inefficiency and inefficacy, the following words in the introduction to Hadopi's annual report (pdf), written by the organization's president, Denis Rapone, ring rather hollow:
There are many ways to respond to a cease and desist notice over trademark rights. The most common response is probably fear-based capitulation. After all, trademark bullying works for a reason, and that reason is that large companies have access to large legal war chests while smaller companies usually just run away from their own rights. Another response is an aggressive defense against the bullying. And, finally, every once in a while you get a response so snarky in tone that it probably registers on the Richter scale, somehow.

The story of how a law firm called Southtown Moxie responded to a C&D from a (maybe?) financial services firm called Financial Moxie is of the snark variety. But first, some background.
Summary: Though social media networks take a wide variety of evolving approaches to their content policies, most have long maintained relatively broad bans on nudity and sexual content, and have heavily employed automated takedown systems to enforce these bans. Many controversies have arisen from this, leading some networks to adopt exceptions in recent years: Facebook now allows images of breastfeeding, childbirth, post-mastectomy scars, and post-gender-reassignment surgery photos, while Facebook-owned Instagram is still developing its exception for nudity in artistic works. However, even with exceptions in place, the heavy reliance on imperfect automated filters can obstruct political and social conversations, and block the sharing of relevant news reports.

One such instance occurred on June 11, 2020, following controversial comments by Australian Prime Minister Scott Morrison, who stated in a radio interview that “there was no slavery in Australia”. This sparked widespread condemnation and rebuttals from both the public and the press, pointing to the long history of enslavement of Australian Aboriginals and Pacific Islanders in the country. One Australian Facebook user posted a late 19th century photo from the state library of Western Australia, depicting Aboriginal men chained together by their necks, along with a statement:
As the coronavirus pandemic continues, nobody really knows what's going to happen — especially if kids start going back to school. Statistical models of the possibilities abound, but this week we're joined by some people who are taking a different approach: John Cordier and Don Burke are the founders of Epistemix, which is using a new agent-based modeling approach to figure out what the future of the pandemic might look like.Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Orleans Parish District Attorney Leon Cannizzaro continues to get himself in legal hot water. Back in 2017, New Orleans journalistic outlet The Lens uncovered his office's use of fake subpoenas to coerce witnesses and crime victims into showing up to provide testimony and make statements.

The documents weren't real. They had never been approved by a judge. But they still had the same threat of fines or imprisonment printed on them, just like the real ones. And these threats were also fake -- no judge had given the office permission to lock these witnesses/victims up.

Once this practice was exposed, the lawsuits began. The DA's office was sued multiple times by multiple plaintiffs. One suit -- filed by the MacArthur Justice Center -- demanded copies of every bogus subpoena issued by the DA's office. Another -- filed by the ACLU -- sought the names of every DA's office attorney who'd signed or sent one of these bogus subpoenas.

Yet another lawsuit targeted the DA's office and the DA directly for violating the law and citizens' rights by issuing fake subpoenas. That one is still pending, but DA Cannizzaro and his attorneys were denied immunity by the Fifth Circuit Court of Appeals, making it far more likely someone will be held personally responsible for cranking out fake legal paperwork.

The MacArthur Center lawsuit continues. And it's more bad news for the DA's office, which has spent nearly a half-decade dodging the Center's public records requests.
Every minute, more than 500 hours of video are uploaded to YouTube, 350,000 tweets are sent, and 510,000 comments are posted on Facebook.

Managing and curating this fire hose of content is an enormous task, and one which grants the platforms enormous power over the contours of online speech. This includes not just decisions around whether a particular post should be deleted, but also more minute and subtle interventions that determine its virality. From deciding how far to allow quack ideas about COVID-19 to take root, to the degree of flexibility that is granted to the President of the United States to break the rules, content moderation raises difficult challenges that lie at the core of debates around freedom of expression.

But while plenty of ink has been spilled on the impact of social media on America’s democracy, these decisions can have an even greater impact around the world. This is particularly true in places where access to traditional media is limited, giving the platforms a virtual monopoly in shaping the public discourse. A platform which fails to take action against hate speech might find itself instrumental in triggering a local pogrom, or even genocide. A platform which acts too aggressively to remove suspected “terrorist propaganda” may find itself destroying evidence of war crimes.

Platforms’ power over the public discourse is partly the result of a conscious decision by global governments to outsource online moderation functions to these private sector actors. Around the world, governments are making increasingly aggressive demands for platforms to police content which they find objectionable. The targeted material can range from risqué photos of the King of Thailand, to material deemed to insult Turkey’s founding president.
In some instances, these requests are grounded in local legal standards, placing platforms in the difficult position of having to decide how to enforce a law from Pakistan, for example, which would be manifestly unconstitutional in the United States.

In most instances, however, moderation decisions are not based on any legal standard at all, but on the platforms’ own privately drafted community guidelines, which are notoriously vague and difficult to understand. All of this leads to a critical lack of accountability in the mechanisms which govern freedom of expression online. And while the perceived opacity, inconsistency and hypocrisy of online content moderation structures may seem frustrating to Americans, for users in the developing world it is vastly worse.

Nearly all of the biggest platforms are based in the United States. This means not only that their decision-makers are more accessible and receptive to their American user base than they are to frustrated netizens in Myanmar or Uganda, but also that their global policies are still heavily influenced by American cultural norms, particularly the First Amendment.

Even though the biggest platforms have made efforts to globalize their operations, there is still a massive imbalance in the ability of journalists, human rights activists, and other vulnerable communities to get through to the U.S.-based staff who decide what they can and cannot say. When platforms do branch out globally, they tend to recruit staff who are connected to existing power structures, rather than those who depend on the platforms as a lifeline away from repressive restrictions on speech.

For example, the pressure to crack down on “terrorist content” inevitably leads to collateral damage against journalism or legitimate political speech, particularly in the Arab world. In setting this calculus, governments and ex-government officials are vastly more likely to have a seat at the table than journalists or human rights activists.
Likewise, the Israeli government has an easier time communicating its wants and needs to Facebook than, say, Palestinian journalists and NGOs.

None of this is meant to minimize the scope and scale of the challenge that the platforms face. It is not easy to develop and enforce content policies which account for the wildly different needs of their global user base. Platforms generally aim to provide everyone with an approximately identical experience, including similar expectations with regard to the boundaries of permitted speech. There is a clear tension between this goal and the conflicting legal, cultural and moral standards in force across the many countries where they operate.

But the importance and weight of these decisions demands that platforms get this balancing right, and develop and enforce policies which adequately reflect their role at the heart of political debates from Russia to South Africa. Even as the platforms have grown and spread around the world, the center of gravity of these debates continues to revolve around D.C. and San Francisco.

This is the first in a series of articles developed by the Wikimedia/Yale Law School Initiative on Intermediaries and Information, appearing here at Techdirt Policy Greenhouse and elsewhere around the internet, intended to bridge the divide between the ongoing policy debates around content moderation and the people who are most impacted by them, particularly across the global south. The authors are academics, civil society activists and journalists whose work lies on the sharp edge of content decisions.
In asking for their contributions, we offered them a relatively free hand to prioritize the issues they saw as the most serious and important with regard to content moderation, and asked them to point to areas where improvement was needed, particularly with regard to the moderation process, community engagement, and transparency.

The issues that they flag include a common frustration with the distant and opaque nature of platforms’ decision-making processes, a desire for platforms to work towards a better understanding of the local socio-cultural dynamics underlying online discourse, and a feeling that platforms’ approach to moderation often does not reflect the importance of their role in facilitating the exercise of core human rights. Although the different voices each offer a unique perspective, they paint a common picture of how platforms’ decision making impacts their lives, and of the need to do better, in line with the power that platforms have in defining the contours of global speech.

Ultimately, our hope with this project is to shed light on the impacts of platforms’ decisions around the world, and provide guidance on how social media platforms might do a better job of developing and applying moderation structures which reflect the needs and values of their diverse global users.

Michael Karanicolas is a Resident Fellow at Yale Law School, where he leads the Wikimedia Initiative on Intermediaries and Information as part of the Information Society Project. You can find him on Twitter at @M_Karanicolas.
The Ultimate Leadership and Stress Management Bundle has 9 courses to help you develop the tools you need to lead and empower your team. Courses focus on interpersonal skills, remote team management, time management and stress management. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
In the early days of the internet, there was no shortage of predictions insisting the emerging technology would be a bold new frontier of transformative change, ushering forth a golden era of connectivity free from the pesky befuddlement of incompetent government leaders, bad actors, and malicious overlords. This new frontier, we were told, would culminate in a fairer and more humane planet, unshackled from the petty hassles of the brick and mortar world, extracting us from our worst impulses as we marched, collectively, toward a better and more ethical future.

Technological innovation, it would seem, was going to fix everything.

This optimism certainly wasn't unwarranted. For those of us who cut our teeth on the advent of the internet (I spent much of my own youth on an Apple IIe at 300 baud, enamored with early bulletin board systems), the capacity for revolutionary change was obvious. It still is. But while there's certainly an endless list of examples showcasing the internet's incredible potential for positive, transformative cultural change and innovation, the last decade has witnessed a clear reckoning for those who seemingly believed the lesser angels of our nature wouldn't come along for the ride.

Internet corporations so large, or so fused to the government itself, that they laugh off the intervention of world governments. Foreign and domestic propaganda efforts, often working in concert, geared toward sowing discord and division. Disinformation at scale so dangerous it helps spur genocide. Bogus missives so potent they can impact elections and the democratic process itself. Trolls; swatting; deep fakes; racist subreddits; live streamed mass shootings; Pinterest child porn; gamergate; millions getting dangerous health information from unqualified nitwits on YouTube.

The core of many of these problems isn't new. In fact, in many instances, these problems are as old as humanity itself.
But they have mutated into dangerous new variants courtesy of technology, scale, naivete, and apathy. There's simply no escaping the fact we could have done a better job predicting their evolutionary impact, and establishing systems of oversight, transparency, and accountability that could have dulled many of their sharpest edges.

As with Greenhouse, privacy edition, there are no easy answers here. Moderation at scale is utterly formidable. Doing it well at scale may be impossible. Every last policy decision comes with trade-offs and a myriad of unforeseen consequences that need to be adequately understood before rushing face first into the fray. As the Section 230 debate makes abundantly clear, there's no shortage of bad faith or unworkable ideas that hold the potential to create far more problems than they profess to solve. Avoiding these pitfalls will require stopping, listening, and understanding one another -- American cultural anomalies, to be sure.

We're hopeful that the insights presented here from those on the front lines of the content moderation debate will help inform policy makers, the public, and experts alike. And we're hopeful the pieces make some small contribution to the foundation of a better, kinder, more equitable internet more in line with our original good intentions. Techdirt Greenhouse is a conversation, so if you've got expertise in the content moderation arena, or see pieces you'd like to respond to, please feel free to reach out.
We've noted repeatedly that not only did the Trump FCC and DOJ rubber stamp the controversial T-Mobile and Sprint merger, they willfully ignored data showing the deal would result in higher prices, lower overall sector pay, fewer jobs, and less overall competition. As most objective antitrust and telecom experts predicted, the ink was barely dry on the deal before the pink slips started to arrive. The higher rates will still likely take a few more years to materialize as the remaining three industry players (T-Mobile, AT&T, and Verizon) perfect their ability to pretend to compete on price without actually doing so.

Over at the DOJ, top "antitrust enforcer" Makan Delrahim not only ignored hard data and critics of the deal, he actively helped guide T-Mobile executives to deal completion (if you're unaware, folks tasked with leading the government's antitrust enforcement efforts most assuredly should not be doing that).

To try and justify this grotesque regulatory capture, the DOJ came up with a bad idea: it would require T-Mobile to offload some spectrum and its Boost Mobile prepaid brand to Dish Network, which would then, theoretically, try and build a replacement carrier for Sprint over a period of 7 years. For much of that time Dish will simply operate as a glorified MVNO (mobile virtual network operator) on T-Mobile's network and be subject to T-Mobile's whims.

The problem: Dish has a long history of hoarding valuable spectrum and promising to build a wireless network and then, you know, not doing that (just ask pre-merger T-Mobile). The other problem: shepherding such a deal to completion requires the current FCC (rabidly proud of "hands off," "light touch" regulation) to aggressively nanny this deal along, something that simply isn't in Ajit Pai's ideological nature.
The remaining three players in the space (T-Mobile, AT&T, Verizon) have every motivation to try and scuttle the creation of this fourth competitor to avoid having to actually (gasp) compete on price.

Throughout, there have been questions about just how serious Dish is. Again, the company has a long history of buying up valuable spectrum and then doing absolutely nothing with it. Dish's spectrum holdings are extremely valuable, and critics have long wondered if the company is just stringing feckless U.S. regulators along until it can sell its spectrum at a steep premium.

Whether Dish is serious still isn't really a settled question, but the company continues to give every impression it may genuinely want to disrupt wireless as a survival strategy in the wake of its struggling traditional TV business. That manifested this week in the acquisition of Tucows' Ting, a small MVNO that had been making slow inroads as a minor player in the wireless space. In a blog post, Ting insists that nothing will really change at the small operation now that it has been acquired by a major corporation engaged in (hopefully) a massive disruption play:
Turkey's president, Recep "Gollum" Erdogan, continues to use legislation to silence everyone who might possibly criticize or mock him. This has been an ongoing process, one that keeps getting worse with every iteration. A failed coup didn't help calm things down in Turkey, which is apparently hoping to pass China and take the top spot on the "journalists jailed" chart.

The latest law has a supposedly noble goal, but there's nothing noble about the propelling force behind it. The EFF reports that another law giving the government even more censorship powers has been passed, thanks to Erdogan's inability to handle criticism.
If ever there were an artist who seems to straddle the line of aggressive intellectual property enforcement, that artist must surely be Taylor Swift. While Swift has herself been subject to silly copyright lawsuits, she has also been quite aggressive and threatening on matters of intellectual property and defamation when it comes to attacking journalists and even her own fans over trademark rights. So, Taylor Swift is, among other things, both the perpetrator and the victim of expansive permission culture.

You would think someone this steeped in these concerns would be quite cautious about stepping on the rights of others. And, yet, it appears that some of the iconography for Swift's forthcoming album and merchandise was fairly callous about the rights of others.
Senator Lindsey Graham very badly wants to push the extremely dangerous EARN IT Act across the finish line. He's up for re-election this fall, wants to burnish his "I took on big tech" creds, and sees EARN IT as his path to grandstanding glory. Never mind the damage it will do to basically everyone. While the bill was radically changed via his manager's amendment last month, it's still an utter disaster that puts basically everything we hold dear about the internet at risk. It will allow for some attacks on encryption and (somewhat bizarrely) will push other services to more fully encrypt. For those that don't do that, there will still be new limitations on Section 230 protections and, very dangerously, it will create strong incentives for internet companies to collect more personal information about every one of their users to make sure they're complying with the law.

It's a weird way to "attack" the power of big tech by forcing them to collect and store more of your private info. But, hey, it's not about what's actually in the bill. It's about whatever bullshit narrative Graham and others know the press will say is in the bill.

Either way, we've heard that Graham and his bipartisan co-sponsor of EARN IT, Senator Richard Blumenthal, are looking to rush EARN IT through with no debate, via a process known as hotlining. Basically, it's a way to try to get around any floor debate, by asking every Senator's office (by email, apparently!) if they would object to a call for unanimous consent. If no Senator objects, then they basically know they can skip debate and get the bill approved. If Senators object, then (behind the scenes) others can start to lean on (or horse trade with) the Senators to get the objections to go away without it all having to happen on the floor of the Senate.
In other words, Graham and Blumenthal are recognizing that they probably can't "earn" the EARN IT Act if it has to go through the official process to have it debated and voted on on the floor, and instead are looking to sneak it through when no one's looking.

While Senator Wyden (once again) has said he'll do whatever he can to block this, it would help if other Senators would stand up as well. Here's what Wyden had to say about it:
Guys, I'm beginning to get the feeling that Senator Josh Hawley doesn't like Section 230. I mean, beyond creating a laughably inaccurate and misleading "True History of Section 230," Hawley has now introduced at least four bills to modify or end Section 230. Perhaps if he introduces 10 he'll get a free one. His latest, introduced last week, would remove Section 230 protections for any internet company that has "behavioral advertising." Now, I've been skeptical of the value of behavioral advertising in many cases, but this new bill is absurd.

Basically, what the bill would do is say that any site that uses behavioral advertising loses 230 protections:
I guess those "rule of law" folks don't care if a law is any good or will do what it intends to do without causing significant collateral damage. All they care about is that it's a law and, as a law, everyone should just subject themselves to it with a minimum of complaining.

The Attorney General is one of those "rule of law" people. Sure, he works for an administration that doesn't seem to care much about laws, propriety, or basic competence, but he's the nation's top cop, so laws and rules it is.

Bill Barr wants holes in encryption. He wants them so badly he's making up new words. "Warrant-proof encryption" isn't any different than regular encryption. It only becomes "warrant-proof" when the DOJ and FBI are talking about it, as though it were some new algorithm that only scrambles communications and data when the presence of a warrant is detected.

Far too many people in Washington think encryption is only valuable to criminals. Bills are in the works to compel encryption-breaking/backdooring. Some even handcuff these demands to Section 230 immunity -- a 2-for-1 special on shoveled shit straight from the federal government to Americans' favorite platforms and services.

Given how much the AG loves broad, abusive laws, it's no surprise he's going on the record to congratulate the author of another terrible law on her newest terrible piece of legislation.
The Professional's Guide to Photography Bundle has 8 courses to help you learn about photography and photo editing. You'll learn about aperture, shutter speed, ISO, lighting, composition, depth of field, flash, what your DSLR can do, and much more. Other courses cover studio and wedding photography. You'll also learn how to improve your photos, help people look their very best, and share your ideas with the world through photo editing. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The whole TikTok story keeps getting dumber. While we still believe that the weird moral panic about TikTok is overblown and Trump's threat to ban the company from the US over the weekend is crazy and unconstitutional, people are still taking things seriously. On Friday evening Trump said that he planned to issue an executive order banning the company (which is not quite how any of this works). He didn't actually do this. He also said he was against an American company like Microsoft buying TikTok, which apparently put the ongoing acquisition talks on hold.

Instead, Microsoft had to call up the President and grovel before him, before he apparently told the company it had until September 15th to work out a deal, and if no deal was made by then, he'd again "ban" TikTok (again, an almost certainly unconstitutional move that would not work). Still, it would be a mess, and I'm sure TikTok and ByteDance (the company's current owner) know that it's probably best to take what they can get from Microsoft while they can. Of course, Microsoft also knows that it's in a good position, because ByteDance has a ticking time bomb on its hands, and the value of TikTok could decrease drastically on September 15th if no deal is made. Even if a ban is unconstitutional, fighting it will take time and money.

Also, it's not clear if there would be much competition for TikTok from anyone other than Microsoft. I mean, Facebook and Mark Zuckerberg would love to buy it, but pretty much everyone knows that there's no way in hell that would get approved by the Justice Department. Even if Facebook weren't already facing a shit ton of antitrust scrutiny from Congress, the FTC, and state Attorneys General, the Bill Barr DOJ has made it clear that it will abuse antitrust to hurt companies Trump is mad at. And contrary to some conspiracy theories, Trump and friends still insist that Facebook is "biased" against them (it's not).
So that wipes out most of the large internet companies that would actually have the capital to buy TikTok. There could be a surprise buyer, but it remains a fairly limited market, at best.

Still, things went from just stupid to downright bizarre on Monday when President Trump announced that he thought most of the money from a TikTok acquisition should go to the US Treasury:
After initially obtaining an FCC license for up to 1 million Starlink satellite broadband customers in the United States, SpaceX last week quadrupled that estimate, and is now hopeful that 5 million Americans will sign up for service. To be clear: SpaceX's service won't be taking on traditional broadband providers in major metro areas. Instead, the company will be using thousands of low-orbit satellites (with lower latency than traditional satellite broadband) to deliver marginally decent service to underserved rural Americans, assuming it winds up being profitable longer term.

In a country where an estimated 42 million Americans can't get any broadband at all (during a raging pandemic, no less), any little improvement helps. By and large, most major outlets have framed Starlink as a massive disruption of the broadband industry:
Free speech keeps getting freer in Tennessee. The state was once home to a host of vexatious defamation lawsuits -- including one where someone subjected to mild criticism sued a journalist over things someone else said. Thanks to the state's new anti-SLAPP law, litigation is slightly less vexatious these days.

But there are still state laws posing threats to free speech by criminalizing stuff the First Amendment says is perfectly acceptable. Tennesseans for Sensible Election Laws (represented by Daniel Horwitz, whose work has made multiple headlines here at Techdirt) sued the state over a campaign law that made it a misdemeanor to publish false information about candidates.

The statute says this:
When we released our CIA: Collect It All card game, based on a declassified CIA training card game, we had included a fun little Easter egg in there, with help from Jon Callas, who helped create modern-day encryption. So far, I believe a grand total of... two people have found it, solved it, and told me about it (though it's possible many more have done so). That was neat, but we had nothing to give them beyond the satisfaction of having solved the puzzle. It seems that others have gone much, much farther with this idea.

Five years ago, Tarah Wheeler put together a big Kickstarter for the book Women in Tech, with advice/ideas/thoughts/stories from a variety of successful women in the tech field.

Five years after publishing that book, Wheeler has now revealed that she flooded the book with hidden puzzles, and while releasing the book itself was a massively difficult project, the fact that a bunch of people found and worked on the puzzles was part of what made it all worth it:
Earlier today we wrote about how Ajit Pai was pushing ahead with the Commerce Department's silly FCC petition regarding a re-interpretation of Section 230 of the Communications Decency Act. We noted that it wouldn't actually be that hard to just say that the whole thing is unconstitutional and outside of the FCC's authority (which it is). Some people have pushed back on us, saying that if Pai didn't do this, Trump would fire him and promote some Trump stan to push through whatever unconstitutional nonsense is wanted.

Well, now at least there's some evidence to suggest that Trump also views the FCC -- a supposedly "independent" agency -- as his personal speech police. Of the Republican Commissioners, Brendan Carr has been quite vocal in his Trump boot-licking, especially with regards to Section 230. He's been almost gleeful in his pronouncements about how evil "big tech" is for "censoring conservatives," and how much he wants to chip away at Section 230. Pai has been pretty much silent on the issue until the announcement today. But the other Republican Commissioner, Mike O'Rielly, has at least suggested that he recognizes the Trump executive order is garbage. Six weeks ago he said he hadn't done his homework yet, but suggested he didn't think Congress had given the FCC any authority on this matter (he's right).

Just last week, during a speech, he made it pretty clear where he stood on this issue. While first saying he wasn't necessarily referencing the Trump executive order, he said the following:
The more the DHS inserts itself into the ongoing civil unrest, the more unrestful it gets. President Trump sent his federal forces to Portland, Oregon -- the first of many "democrat" cities the president feels are too violent/unrestful -- to protect federal buildings from violent graffiti outbursts or whatever. When the DHS arrived -- represented by the CBP, ICE, US Marshals, and other federal law enforcement -- it announced its arrival with secret police tactics straight out of the Gestapo playbook.

Since that wasn't martial state enough, the federal officers turned things up, opening fire on journalists and legal observers. Literally. Local journalists were tear gassed, hit with pepper spray/pepper balls, and shot with "non-lethal" projectiles. The journalists and observers sued the federal government, securing a restraining order forbidding federal officers from continuing to violate the Constitution. Federal officers refused to stop (their) rioting and now may face sanctions for their actions. They will definitely be facing additional lawsuits, since the restraining order made it clear willful violators would not be granted qualified immunity.

As if all of this wasn't enough, news leaked out that DHS was compiling "intelligence reports" on local journalists, as well as journalists located elsewhere in the nation who had published leaked DHS documents. One day after breaking the news about the journalist-targeting "intelligence reports," the Washington Post broke more news -- again with the aid of a leaked DHS document. This one shows the DHS is (still) on the wrong side of the First Amendment. It also appears to show the agency lying to its oversight.
We've mentioned at great length how Trump's executive order to more heavily "regulate" social media is an unworkable joke. It attempts to tackle a problem that doesn't exist ("conservative censorship") by attacking a law that actually protects free speech (Section 230), all to be enforced by agencies (like the FCC) that don't actually have the authority to do anything of the sort. You can't overrule the law by executive order or regulatory fiat, nor can you ignore the Constitution. The EO is a dumb joke by folks who don't understand how any of this works, and it should be treated as such.

Instead, most press coverage of the move is still somehow framed as "very serious adult policy," despite being little more than a glorified brain fart.

The FCC also knows the order is unworkable garbage that flies directly in the face of years of espoused (government hands off) ideology by Ajit Pai, Brendan Carr and friends. And yet, terrified of upsetting dear leader, Pai issued a totally feckless statement on Monday stating the EO would be pushed through the rule-making process, pretending as if this was all just ordinary, sensible tech policy:
Early in 2019, we wrote about stream-ripping site FLVTO.biz winning in court against the record labels on jurisdictional grounds. The site, which is Russian and has no presence in the United States, argued that the courts had no jurisdiction. The RIAA labels argued against that, essentially claiming that because Americans could get to the site, it therefore constituted some kind of commercial contract, even though no actual contract existed. Instead, the site merely makes money by displaying advertisements. The court very much agreed with the site and dismissed the case.

On appeal in May, however, the case was sent back to the lower court.
The Accredited Agile Project Management Bundle by SPOCE is designed to equip users with the know-how they need to master Agile project management, PRINCE2 Project Management, and PRINCE2 Agile Project Management. You'll learn the skills needed for managing and delivering successful projects. You'll also gain an understanding of risk management, planning, handling change, and more. It's on sale for $99.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Is there anything the DHS can't turn into a debacle while pretending to secure the homeland? It would appear it's impossible for America's least essential security agency to move forward without stepping in something.

As protests in Portland neared the 60-day mark, the DHS was tasked with protecting federal property like courthouses and… um… statues. ICE, CBP, Federal Protective Services, and US Marshals all arrived in Portland ready to go to war with people exercising their First Amendment rights. You only have one chance to make a first impression, and the unidentified officers from unknown agencies throwing protesters into unmarked vehicles made one hell of a first impression.

The federal agencies went to war, firing tear gas and projectiles at protesters, rioters, journalists, and legal observers. It made no difference to the DHS which was which. But it did make a difference to a federal judge, who issued a temporary restraining order forbidding federal officers from attacking, gassing, assaulting, or arresting journalists and observers who were just trying to do their jobs.

The federal officers immediately violated the restraining order. Or, more accurately, they never stopped doing the stuff that earned them the restraining order in the first place. Apparently, the DHS feels it hasn't violated First Amendment rights hard enough. The latest black eye for the DHS is more targeting of journalists, this time with surveillance.
This week, several of our top comments come in response to publisher Ken Whyte moaning about libraries, starting with our anonymous first place winner for insightful:
Five Years Ago

This week in 2015, we saw a judge slam a sheriff for an attack on Backpage that raised serious First Amendment questions, and a student succeed after an eight-year legal battle against a university over being expelled for speech. On the other side of the free speech coin, we saw the cops shut down a hologram concert because they didn't like a rapper's lyrics, James Woods sue a random Twitter user for $10 million, and of course Donald Trump continue his lawsuit against Univision (and that post contains our first mention of a certain lawyer, with the now-entertaining phrasing of "apparently, it's some guy named Michael Cohen, who isn't just out of his depth on stuff, but he appears to be actively making things worse.")

We also saw a huge bombshell in the lawsuit over the copyright status of Happy Birthday, with new evidence showing the song is in the public domain that Warner Music quickly tried to muddy the waters around.

Ten Years Ago

This week in 2010, we wondered why the press was still blindly believing entertainment industry "studies", and how there were new copyrights being claimed on work by an artist who died 70 years ago. Copyright was interfering with technology both old-old and new-old, disrupting the preservation of decaying player piano rolls as well as obsolete video games. And the new round of DMCA anti-circumvention exemptions surprised everyone by including phone jailbreaking, though it left out plenty of good suggestions too.

Fifteen Years Ago

This week in 2005, the anti-open-WiFi brigade was stirring up FUD about cantennas and the press was taking the bait. ISP Telus learned all about the Streisand Effect by blocking its customers from reaching websites supporting its employees in their union battle against the company, while offering weak excuses, and we were not exactly shocked to learn that Qualcomm founder Irwin Jacobs doesn't like muni-WiFi.
Canada put the final nail in the idea of an iPod tax, one UK court showed it wasn't fooled by ridiculous claims of losses to software piracy, and yet another study showed that file sharers are the music industry's best customers.
This seemed fairly inevitable, after it became quite clear that the Twitter hack from a few weeks ago was done by teen hackers who didn't seem to do much to cover their tracks, but officials in Florida announced the arrest of a Florida teenager for participating in the hack, followed by the DOJ announcing two others as well -- a 19-year-old in the UK and a 22-year-old in Florida.

As for why the first announcement was separate and made by Florida officials: it appears that it involved a 17-year-old, and apparently it was easier to charge him as an adult under state law, rather than under federal law, as with the other two.
Summary: With news breaking so rapidly, it’s possible that even major newspapers or official sources may get information wrong. Social media sites, like Twitter, need to determine how to deal with “news” tweets that later turn out to be misleading -- even when coming from major news organizations citing official government sources.

With widespread protests around the United States calling attention to police brutality and police activity disproportionately targeting the black community, the NY Post tweeted a link to an article discussing an internal communication by the NY Police Department (NYPD) warning of “concrete disguised as ice cream cups” that were supposedly found at some of the protests, with the clear implication being that this was a way to disguise items that could be used for violence or property destruction.

The article was criticized widely by people who pointed out that the items in fact appear to be part of a standard process for testing concrete mixtures, with the details of each mixture written on the side of the containers. Since these were found at a construction site, it seems likely that the NYPD’s “alert” was, at best, misleading.

In response to continuing criticism, the NY Post made a very minor edit to the story, noting only that the markings on the cups make them “resemble concrete sample tests commonly used on construction sites.” However, the story and its title remained unchanged, and the NY Post retweeted it a day later -- leading some to question why the NY Post was publishing misinformation, even if it was accurately reporting the content of an internal police memo.

Questions for Twitter:
There's been a panic over the last few weeks about TikTok, the rapidly growing social network that is owned by the Chinese internet giant ByteDance (by way of history: ByteDance purchased a startup called Musical.ly in 2017, and rebranded it TikTok in 2018, and then it started growing like crazy). A few weeks ago, the Trump administration started suggesting it would ban TikTok, and a story was built up around the idea that TikTok was some sort of national security threat, despite very little evidence to support this. A separate narrative was simply that Trump was annoyed that TikTok kids made Trump look bad in Tulsa by reserving a bunch of tickets to his rally that they never intended to use.

Either way, it was announced today that the Trump administration was likely to order ByteDance to shed TikTok, and immediately with that came the news that Microsoft was a likely buyer.

The whole thing is kind of silly. The most compelling argument I've seen for why the US should ban TikTok came from Ben Thompson at Stratechery, who more or less says (this is a very simplified version of his argument, so read the whole thing) that since China mostly bans US apps and services within its Great Firewall, there's an uneven playing field. I tend to lean slightly the other way: that supporting more freedom is a better approach. It feels like banning TikTok or forcing a sale is stooping to their level, and even validating their approach. And that worries me. And, yes, in the short run it puts us at a slight disadvantage on the global playing field, but frankly, US internet companies are still doing pretty damn well. The idea that we need to force a sale like this sets a questionable and potentially dangerous precedent -- suggesting we don't think that American firms can really compete.

On top of that, if the concern is about China, then the fact that most of our network and computer equipment is built in China would seem like maybe a larger concern?
But beyond a weird, similar freakout about Huawei, no one seems to be taking any serious interest in that. And that doesn't get into the fact that US intelligence has leaned heavily on US internet companies to try to get access to global data -- meaning that there does seem to be a bit of US exceptionalism built into all of this: it's okay when we do it, but an affront if any other government might do the same thing...

Separately, this whole situation with TikTok and Microsoft demonstrates the pure silliness of the antitrust hearing in the House earlier this week. Note that there were claims that the four companies there represented "monopoly power." And yet, just days later, we're talking about how a recent entrant in the market, which has grown up quickly, and which Facebook certainly sees as a threat, is so powerful on the internet that it needs to be sold off by its Chinese owners -- and the leading candidate to purchase it, Microsoft, is not even one of the "too powerful" companies who were on the panel.

If a new entrant can rise up so quickly to be a "threat" and then needs to be purchased by another giant... it certainly suggests that the internet market still remains pretty vibrant, and not at all locked down by a few monopolies.
Update: Sooo... we already have a bunch of updates on this story. Trump has said he's banning TikTok entirely and is "against" allowing a US company to buy TikTok. Below is the original post, with only a slight clarification regarding Ben Thompson's thoughts on TikTok, which I didn't present very clearly in the original. Then, beneath the post, I'll have more thoughts on Trump's comments.

There's been a panic over the last few weeks about TikTok, the rapidly growing social network owned by the Chinese internet giant ByteDance (by way of history: ByteDance purchased a startup called Musical.ly in 2017, rebranded it TikTok in 2018, and then it started growing like crazy). A few weeks ago, the Trump administration started suggesting it would ban TikTok, and a story was built up around the idea that TikTok was some sort of national security threat, despite very little evidence to support this. A separate narrative was simply that Trump was annoyed that TikTok kids made him look bad in Tulsa by reserving a bunch of tickets to his rally that they never intended to use.

Either way, it was announced today that the Trump administration was likely to order ByteDance to shed TikTok, and immediately with that came the news that Microsoft was a likely buyer.

The whole thing is kind of silly. The most compelling argument I've seen for why the US should ban TikTok came from Ben Thompson at Stratechery, who more or less says (this is a very simplified version of his argument, so read the whole thing) that since China is engaged in a war to impose its ideology on the world, and will make use of TikTok and other services to effectively attack Western liberalism, it is dangerous to allow it to operate in the West under Chinese ownership. He supports selling TikTok off to an American company or, barring that, banning the app in the West.
I tend to lean the other way: to me, banning TikTok strikes me as effectively proving China's views on liberalism, allowing it to accuse the West of hypocrisy and to use these actions to justify its own.

On top of that, if the concern is about China, then the fact that most of our network and computer equipment is built in China would seem like maybe a larger concern? But beyond a weird, similar freakout about Huawei, no one seems to be taking any serious interest in that. And that doesn't get into the fact that US intelligence has leaned heavily on US internet companies to try to get access to global data -- meaning that there does seem to be a bit of US exceptionalism built into all of this: it's okay when we do it, but an affront if any other government might do the same thing...

Separately, this whole situation with TikTok and Microsoft demonstrates the pure silliness of the antitrust hearing in the House earlier this week. Note that there were claims that the four companies there represented "monopoly power." And yet, just days later, we're talking about how a recent entrant in the market, which has grown up quickly, and which Facebook certainly sees as a threat, is so powerful on the internet that it needs to be sold off by its Chinese owners -- and the leading candidate to purchase it, Microsoft, is not even one of the "too powerful" companies who were on the panel.

If a new entrant can rise up so quickly to be a "threat" and then needs to be purchased by another giant... it certainly suggests that the internet market still remains pretty vibrant, and not at all locked down by a few monopolies.

Updated thoughts: So that's the original above. Now that Trump is saying he really is going to ban TikTok and is against its sale, there are multiple issues raised.
Trump seems to think he can do this under his emergency economic powers (effectively declaring TikTok to be a national security issue -- the same "tool" he used to impose tariffs on China without Congressional approval). If he goes that route, there will be lawsuits -- and there will be significant Constitutional issues raised. The Supreme Court has in the past declared video games protected speech, in Brown v. Entertainment Merchants Association (the case about whether or not the government could regulate violent video games and require age warnings). And in the 2nd Circuit, in a somewhat frustrating decision regarding the publishing of some code that would break DRM, Universal v. Corley, it is at least notable that the court made a clear statement that software is protected under the 1st Amendment:
Last month, we wrote about the big publishers suing the Internet Archive over its Controlled Digital Lending (CDL) program, as well as its National Emergency Library (NEL). As we've explained over and over again, the Internet Archive is doing exactly what libraries have always done: lending books. The CDL program was structured to mimic exactly how a traditional library works, with a 1-to-1 relationship between physical books owned by the library and digital copies that can be lent out.

While some struggled with the concept of the NEL -- since it was basically just the CDL, but without the 1-to-1 relationship (and thus, without wait lists) -- it seemed reasonably defensible: nearly all public libraries at the time had shut down entirely due to the COVID-19 pandemic, and the NEL was helping people who otherwise would never have had access to the books that were sitting inside libraries, collecting dust on inaccessible shelves. Indeed, plenty of teachers and schools thanked the Internet Archive for making it possible for students to still read books that were stuck inside locked-up classrooms. But, again, this lawsuit wasn't just about the NEL; it was about the whole CDL program. The publishers have been whining about the CDL for a while, but hadn't sued until now.

Of course, the reality is that the big publishers see digital ebooks as an opportunity to craft a new business model. With traditional books, libraries buy the books, just like anyone else, and then lend them out. But thanks to a strained interpretation of copyright law, when it came to ebooks, the publishers jacked up the price for libraries to insane levels and kept putting more and more conditions on them. For example, Macmillan, for a while, was charging $60 per book -- with a limit of 52 lends or two years of lending, whichever came first. And then you'd have to renew.

Basically, publishers were abusing copyright law to try to jam an awful and awfully expensive model down on libraries -- exposing how much publishers really hate libraries, while pretending otherwise.

Anyway, the Internet Archive has filed its response to the lawsuit, which does the typical thing of effectively denying all of the claims in the lawsuit (though I will admit that I chuckled to see them even "deny" the claim that the Archive's headquarters are in an "exclusive" part of San Francisco -- FWIW, I'd probably describe the area more as "not easily accessible by public transit," but that doesn't quite make it exclusive, or at least not any more exclusive than most of the rest of SF).
You can't always pick your fighter for Constitutional challenges. Sometimes you're handed an unsympathetic challenger, which makes defending everyone's rights a bit more difficult, because a lot of people wouldn't mind too much if this particular person's rights were limited. But that's not how rights work.

A pretty lousy decision has been handed down by a Minnesota federal court. A challenge to two laws -- one city, one state -- has been met with a judicial shrug that says sometimes rights just aren't rights when there are children involved. (h/t Eric Goldman)

The plaintiff is Sally Ness, an "activist" who appears to be overly concerned with a local mosque and its attached school. Ness is discussed in this early reporting on her lawsuit, which shows her activism is pretty limited in scope. Her nemesis appears to be the Dar Al-Farooq Center and its school, Success Academy. Ness feels there's too much traffic and too much use of a local public park by the Center and the school.

Here's how she's fighting back against apparently city-approved use of Smith Park:
2019 saw a record number of consumers ditch traditional cable television. 2020 was already poised to be even worse, and that was before a pandemic came to town. The pandemic not only sidelined live sports (one of the last reasons many subscribe to traditional cable in the first place), it put an additional strain on many folks' wallets, resulting in cord cutting spiking even higher.

Among the hardest hit continues to be AT&T, whose customers have been fleeing hand over fist even with AT&T's attempt to pivot to streaming video. According to AT&T's latest earnings report, the company lost yet another 954,000 pay TV subscribers -- 886,000 from the company's traditional DirecTV and IPTV television offerings, and another 68,000 customers from the company's creatively named AT&T TV Now streaming video platform. All told, the losses left AT&T with 18.4 million video customers, including both Premium TV and AT&T TV Now, down from nearly 25.5 million in mid-2018.

That's a fairly amazing face plant for a company that spent more than $150 billion on megamergers (DirecTV in 2015, Time Warner in 2018) in a bid to dominate the pay TV sector. The problem is the deals saddled AT&T with an absolute mountain of debt, which the company then attempted to extract from its customers in the form of relentless price hikes. During an economic crisis and pandemic:
The DOJ's Civil Rights Division has wrapped up an Obama-era probe into the Alabama prison system. Initiated in 2016, the investigation covers 13 prisons in the state, containing nearly 17,000 prisoners. What the DOJ found was widespread deployment of excessive force and a resolute lack of concern for inmates' well-being. (via Huffington Post)

The report [PDF] notes that the Constitution (indirectly) gives inmates the right to be free from violence from other prisoners. The correctional facilities investigated here did almost nothing to prevent inmate-on-inmate violence.
If there is one thing that really needs to stop at the USPTO, it is the organization's continued approval of trademarks on terms that are basic geographic indicators. While this isn't just an American thing, far too often people are able to get trademark approvals for marks like area codes or the names of their home counties and towns. Given that the purpose of trademark law is to allow unique identifiers for the source of a good or service, marks like these are obvious perversions of the law.

And yet it keeps happening. One recent example of this comes from Kentucky, where two Louisville breweries are in a fight over the use of the name of a neighborhood in that city, Butchertown.
The COVID-19 pandemic has spawned an infodemic, a vast and complicated mix of information, misinformation and disinformation.

In this environment, false narratives – that the virus was "planned," that it originated as a bioweapon, that COVID-19 symptoms are caused by 5G wireless communications technology – have spread like wildfire across social media and other communication platforms. Some of these bogus narratives play a role in disinformation campaigns.

The notion of disinformation often brings to mind easy-to-spot propaganda peddled by totalitarian states, but the reality is much more complex. Though disinformation does serve an agenda, it is often camouflaged in facts and advanced by innocent and often well-meaning individuals.

As a researcher who studies how communications technologies are used during crises, I've found that this mix of information types makes it difficult for people, including those who build and run online platforms, to distinguish an organic rumor from an organized disinformation campaign. And this challenge is not getting any easier as efforts to understand and respond to COVID-19 get caught up in the political machinations of this year's presidential election.

Rumors, misinformation and disinformation

Rumors are, and have always been, common during crisis events. Crises are often accompanied by uncertainty about the event and anxiety about its impacts and how people should respond. People naturally want to resolve that uncertainty and anxiety, and often attempt to do so through collective sensemaking. It's a process of coming together to gather information and theorize about the unfolding event. Rumors are a natural byproduct.

Rumors aren't necessarily bad. But the same conditions that produce rumors also make people vulnerable to disinformation, which is more insidious.
Unlike rumors and misinformation, which may or may not be intentional, disinformation is false or misleading information spread for a particular objective, often a political or financial aim.

Disinformation has its roots in the practice of dezinformatsiya used by the Soviet Union's intelligence agencies to attempt to change how people understood and interpreted events in the world. It's useful to think of disinformation not as a single piece of information or even a single narrative, but as a campaign, a set of actions and narratives produced and spread to deceive for political purpose.

Lawrence Martin-Bittman, a former Soviet intelligence officer who defected from what was then Czechoslovakia and later became a professor of disinformation, described how effective disinformation campaigns are often built around a true or plausible core. They exploit existing biases, divisions and inconsistencies in a targeted group or society. And they often employ "unwitting agents" to spread their content and advance their objectives.

Regardless of the perpetrator, disinformation functions on multiple levels and scales. While a single disinformation campaign may have a specific objective – for instance, changing public opinion about a political candidate or policy – pervasive disinformation works at a more profound level to undermine democratic societies.

The case of the 'Plandemic' video

Distinguishing between unintentional misinformation and intentional disinformation is a critical challenge. Intent is often hard to infer, especially in online spaces where the original source of information can be obscured. In addition, disinformation can be spread by people who believe it to be true. And unintentional misinformation can be strategically amplified as part of a disinformation campaign. Definitions and distinctions get messy, fast.

Consider the case of the "Plandemic" video that blazed across social media platforms in May 2020.
The video contained a range of false claims and conspiracy theories about COVID-19. Problematically, it advocated against wearing masks, claiming they would "activate" the virus, and laid the foundations for eventual refusal of a COVID-19 vaccine.

Though many of these false narratives had emerged elsewhere online, the "Plandemic" video brought them together in a single, slickly produced 26-minute video. Before being removed by the platforms for containing harmful medical misinformation, the video propagated widely on Facebook and received millions of YouTube views.

As it spread, it was actively promoted and amplified by public groups on Facebook and networked communities on Twitter associated with the anti-vaccine movement, the QAnon conspiracy theory community and pro-Trump political activism.

But was this a case of misinformation or disinformation? The answer lies in understanding how – and inferring a little about why – the video went viral.

The video's protagonist was Dr. Judy Mikovits, a discredited scientist who had previously advocated for several false theories in the medical domain – for example, claiming that vaccines cause autism. In the lead-up to the video's release, she was promoting a new book, which featured many of the narratives that appeared in the Plandemic video.

One of those narratives was an accusation against Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases. At the time, Fauci was a focus of criticism for promoting social distancing measures that some conservatives viewed as harmful to the economy. Public comments from Mikovits and her associates suggest that damaging Fauci's reputation was a specific goal of their campaign.

In the weeks leading up to the release of the Plandemic video, a concerted effort to lift Mikovits' profile took shape across several social media platforms. A new Twitter account was started in her name, quickly accumulating thousands of followers.
She appeared in interviews with hyperpartisan news outlets such as The Epoch Times and True Pundit. Back on Twitter, Mikovits greeted her new followers with the message: "Soon, Dr Fauci, everyone will know who you 'really are'."

This background suggests that Mikovits and her collaborators had several objectives beyond simply sharing her misinformed theories about COVID-19, including financial, political and reputational motives. However, it is also possible that Mikovits is a sincere believer of the information she was sharing, as were millions of people who shared and retweeted her content online.

What's ahead

In the United States, as COVID-19 blurs into the presidential election, we're likely to continue to see disinformation campaigns employed for political, financial and reputational gain. Domestic activist groups will use these techniques to produce and spread false and misleading narratives about the disease – and about the election. Foreign agents will attempt to join the conversation, often by infiltrating existing groups and attempting to steer them toward their goals.

For example, there will likely be attempts to use the threat of COVID-19 to frighten people away from the polls. Along with those direct attacks on election integrity, there are likely to also be indirect effects – on people's perceptions of election integrity – from both sincere activists and agents of disinformation campaigns.

Efforts to shape attitudes and policies around voting are already in motion. These include work to draw attention to voter suppression and attempts to frame mail-in voting as vulnerable to fraud. Some of this rhetoric stems from sincere criticism meant to inspire action to make the electoral systems stronger.
Other narratives, for example unsupported claims of "voter fraud," seem to serve the primary aim of undermining trust in those systems.

History teaches that this blending of activism and active measures, of foreign and domestic actors, and of witting and unwitting agents, is nothing new. And certainly the difficulty of distinguishing between them is not made any easier in the connected era. But better understanding these intersections can help researchers, journalists, communications platform designers, policymakers and society at large develop strategies for mitigating the impacts of disinformation during this challenging moment.

Kate Starbird, Associate Professor of Human Centered Design & Engineering, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.
Cook County (IL) Sheriff Tom Dart doesn't appear to know much about the First Amendment. He also doesn't understand Section 230. The grandstanding sheriff has graced Techdirt's pages multiple times for suing online marketplaces and strong-arming payment companies in a severely misguided attempt to combat sex trafficking. His assaults on Craigslist and Backpage were terminated by federal courts, which reminded the sheriff of the existence of both Section 230 immunity and the First Amendment. Law enforcement officers may not be required to know the laws they enforce, but they should at least have some passing familiarity with the Constitution.

Sadly, Sheriff Dart is still unfamiliar with Constitutional rights and protections. The sheriff's latest violation of rights stems from his decision to engage in pretrial detention practices that ignore the Constitution, as well as changes to local law. The Seventh Circuit Appeals Court doesn't care much for that. Its order [PDF], which allows plaintiffs to continue their lawsuit against the sheriff for violation of their rights, makes it clear the sheriff's freelancing isn't doing the Fourth Amendment any favors.

The opinion opens with an idealistic quote from the Supreme Court.