

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-20 07:31
When Piracy Literally Saves Lives
Early on in the pandemic we wrote about how some makers of medical equipment, such as ventilators, were making it difficult, if not impossible, for hospitals to fix their own ventilators. Many have used software locks -- DRM -- and refuse to provide the information necessary to keep those machines online.

And thus it was inevitable that piracy would step in to fill the void. Vice has the incredible story of a rapidly growing grey market for both hacked hardware and software to keep ventilators running:
Daily Deal: The Ultimate All-Access Business Bundle
The Ultimate All-Access Business Bundle has 12 courses to help you learn new business skills and boost your business toward success. You'll learn how to motivate employees, delegate tasks, manage personal finances, ace interviews, and more. The bundle is on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Appeals Court: Government Can't Keep Warrants Under Seal Just Because The Unsealing Process Is Difficult
The US government's law enforcement agencies really enjoy their unearned secrecy. They file warrants and subpoenas under seal, rendering entire dockets useless, if not completely invisible. And they maintain this secrecy for years, long after the underlying investigations have been closed.

Some of the documents the government loves to file under seal include SCA [Stored Communications Act] warrants and pen register/trap-and-trace [PRTT] orders. Since 2013, Jason Leopold has been fighting the government's opacity. In 2016, he was joined by the Reporters Committee for Freedom of the Press in his attempt to get this blanket secrecy lifted.

Arguing that courts still bear a presumption of openness and transparency, Leopold challenged the government's sealing of these records. In 2018, Judge Beryl Howell gave Leopold and the RCFP a partial win, ordering the government to produce a sampling of all the records filed under seal.

This wasn't enough. It covered only about 10% of the government's filings. Leopold and RCFP demanded more. The government responded that it would be too "burdensome" for it to dig into its dozens of sealed dockets/documents to see what could be released without harming long-dead investigations or always-apparently-in-peril national security. Unfortunately, Judge Beryl Howell agreed.

The DC Appeals Court has taken a look at the case and says the government needs to get busy handing stuff over. The "tradition of openness" covers these warrants and orders, and claiming compliance is difficult isn't a legitimate excuse for unjustified secrecy. Here's a taster from the opening of the decision [PDF]:
This Iowa Town Is Building An Open Access Fiber Broadband Network. Google Fiber Is Its First Customer
West Des Moines, Iowa this week announced that it would be building a massive, open access fiber network. The city is one of roughly 750 towns and cities that, frustrated by high prices, limited competition, and patchy availability of US broadband, have decided to instead build their own networks. Well, assuming that AT&T and Comcast haven't bribed your state officials to pass laws banning such efforts yet.

West Des Moines' new network will be funded by taxable General Obligation bonds with low interest rates. It's too early to note what kind of speeds and prices will be on offer, but the city's announcement indicates that Google Fiber will be one of its first customers:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful side is aerinai responding to the notion that it should be no big deal for foreign students to go home and take their classes remotely:
In Case You Missed It: The Return Of Nerd Harder Gear, Plus New Face Masks!
Nerd Harder gear is back, and face masks are available
Reverse Warrant Used In Robbery Investigation Being Challenged As Unconstitutional
Reverse warrants are being challenged in a criminal case involving a bank robbery in Virginia. These warrants (also called "geofence warrants") work in reverse, hence the nickname. Rather than seeking to search property belonging to a known suspect, investigators approach Google with a demand for information on all cellphones in a certain location at a certain time and work backwards from this stash to determine who to pursue as a suspect.

Warrants require probable cause. And there doesn't seem to be much in the way of specific probable cause supporting these fishing expeditions. In this case, a bank was robbed in the late afternoon, resulting in plenty of people unrelated to the robbery being in the vicinity. This is all it takes to turn random people into suspects. And that has gone badly for investigators and, more importantly, innocent citizens on more than one occasion.

Accused bank robber Okello Chatrie is challenging the reverse warrant that led to his arrest and indictment on federal charges. Chatrie hopes that warrant will be found deficient because it will make it easier to undo the damage he seemingly inflicted on himself after he was taken into custody.
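The mechanics are easy to see in miniature. A geofence query is essentially a filter over a provider's location database: return every device observed within some radius of a point during a time window. Here's a rough sketch; the record format, field names, and numbers are all hypothetical, and this is not any real provider's actual interface:

```python
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    earth_radius_m = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

def geofence_query(records, lat, lon, radius_m, start, end):
    """Return the IDs of every device seen inside the fence during the window.

    `records` is an iterable of (device_id, lat, lon, timestamp) tuples,
    a stand-in for a provider's location database.
    """
    return {
        dev for dev, rlat, rlon, ts in records
        if start <= ts <= end and haversine_m(lat, lon, rlat, rlon) <= radius_m
    }
```

Note what the query does not take as input: a suspect. Every device that happened to pass through the fence during the window becomes a lead, which is exactly the over-breadth problem Chatrie's challenge raises.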
Clearview Calls It Quits In Canada While Under Investigation By The Privacy Commissioner
Clearview AI -- the facial recognition service that gives all kinds of entities access to billions of face images scraped from the web -- is suddenly scaling back on its aggressive expansion plans. Once the plaything of billionaires, the unproven AI has been sold to retailers, fitness centers, police departments, and a handful of human rights violators.

Soon after its existence was exposed by the New York Times, Clearview AI proceeded to announce its plans to expand worldwide -- something it hoped to achieve while being sued multiple times in its home country. Canada was in its sights, but not so much any more.

In February, the Privacy Commissioner of Canada announced its office would be investigating Clearview AI and its still-not-independently-tested algorithm. This was triggered by numerous reports from journalists exposing how Clearview obtained its massive database of photos and how it was being used by government agencies and private entities.

The Privacy Commissioner was not impressed with Clearview's apparent disregard for privacy, saying it was specifically looking into reports that Clearview was collecting and using personal info without consent.

The investigation is still ongoing. But it appears Clearview is hoping to dodge the worst of it by removing itself from the Canadian market:
Twitch Faces Sudden Stream of DMCA Notices Over Background Music
There is obviously a great deal of action going on currently in the streaming world, spurred on in part by the COVID-19 crisis that has many people at home looking for fresh content. From attempts to respond to social movements and tamp down "hateful" content to changes in the competitive landscape, streaming services are having themselves a moment. But with the sudden uptick in popularity comes a new spotlight, painting a target on streaming platforms for everyone from scammers to intellectual property maximalists.

Twitch has recently found itself a target for the latter, suddenly getting slammed with a wave of DMCA notices that appear to focus mostly on background music.
EFF, Orin Kerr Ask The Supreme Court To Prevent Turning The CFAA Into A Convenient Way To Punish Site Users, Security Researchers
As we reported here earlier, the Supreme Court is examining a CFAA case that could have far-reaching implications for… well, just about anyone who uses any online service, website, platform, or device. The case deals with a cop who abused his access privileges to run unapproved searches of government databases in exchange for cash. Obviously, this is far from an ideal case to argue against overbroad readings of an overbroad law. But, given the abuses perpetrated under this law, non-ideal cases will have to do if we don't want to be turned into criminals by generous judicial interpretations of the phrase "unauthorized access."

Plenty of people and entities are lobbing briefs in the Supreme Court's direction, begging it to avoid criminalizing activities honest Americans participate in every day. It's not just about security research. But it definitely does affect researchers -- both those engaging in normal security research efforts, and those ignoring websites' terms of service in attempts to determine whether sites engage in biased practices.

The EFF's brief [PDF] focuses mainly on the negative effects on researchers -- security and otherwise. It points out security researchers are often threatened with CFAA prosecutions/lawsuits just because entities engaging in lax security practices don't like having their lapses noticed, much less pointed out publicly. These researchers perform a valuable public service.
Daily Deal: The Epic Python Developer Certification Bundle
The Epic Python Developer Certification Bundle has 12 courses aimed at anyone with little or no coding experience and a strong desire to start learning Python from scratch. This hands-on training takes you from "Hello World!" to advanced Python topics in just a few hours. You'll then put your knowledge into practice by answering quizzes, completing exercises, and doing actual coding. It's on sale for $40.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
What That Harper's Letter About Cancel Culture Could Have Said
Earlier this week I wrote about the open letter that was published in Harper's, signed by around 150 very prominent writers/thinkers. My response to it was to heavily criticize both the premise and the specifics in the letter, and to argue that it sought to do the very thing it claimed to be against. That is, it presented itself as support for free speech and counterspeech, and against attempts to shut down speech -- and yet, almost all of the (deliberately vague) examples it pointed to were not examples of shutting down speech, but rather examples of facing consequences from speech and counterspeech itself. The open letter could be -- and in many cases was -- read to basically say "we should be able to speak without professional consequences."

Some people liked my response, and some people hated it. The debate has raged on, and that's cool. That's what we should be supporting, right? More debate and speech.

Many people are referring to the letter as being about "cancel culture," even though the letter itself never uses the phrase. But everyone recognizes that the concept is what's at the core of the letter: the idea that someone will say something that "the mob" considers beyond the pale, and suddenly they're "cancelled." We'll get to how realistic that actually is shortly.

But part of the problem with the letter was that it was written in terms that could be used both to condemn overreaction by "mob" voices on Twitter and by certain people to say "stop criticizing my bad ideas so vociferously." It provides nothing of consequence to anyone trying to distinguish between the two, and thus when some assumed it was for the purposes of the latter, rather than the former, that should impeach the drafting of the letter itself, rather than its critics.
Still, that makes the letter at best useless and, at worst, capable of being used not in support of free speech, but as a tool to condemn counterspeech and consequences.

Some well-meaning critics challenged my criticism of the post on a few grounds that are at least worth considering. First was the argument that my post imputes motives to the signatories that were unfair. And I'll grant that criticism. Indeed, quite often lately, I've found that when people leap to assume the motives of others, that's often when debates and discussions go off the rails. I'm just as guilty of that as anyone else, and I should try to be better about that. But there's a flipside to that argument as well, which is that there are people out there who purposely engage in bad faith arguments, and go ballistic when you call them on it, insisting that you can't impute such bad faith into their argument based solely on the words that they spoke (though, often by ignoring nearly all of the contextual relevancy that makes their bad faith evident).

In other words, there certainly are mixed motives among the signatories. I'd argue that some signed on in good faith in the belief that the world really is being pushed by illiberal forces that are shutting down realms of speech, while others just seem to be upset that people are calling out their bad ideas and they're suffering the consequences for it. I focused on the latter, when a more charitable read perhaps should have focused on -- or at least acknowledged -- the former.

And as someone who has spent decades fighting for the importance of free expression, at times at great cost to myself, I have quite a lot of sympathy for what a "good faith" reading of the letter appears to want to say. But I think the letter fails to make its case on multiple grounds, even removing the question of the motives of the signatories.

First, there's the question of how widespread "cancel culture" truly is.
I would argue that it exists, but is vastly overstated -- and I'm saying this as someone who has had friends expelled from their jobs unfairly, in my view, following online mobs ganging up on them. I do believe that, as with any speech, it is possible to use it to galvanize actions I disagree with. But, as I said in my original writeup, the details matter. Many of the claims of "cancel culture" remind me of the claims of "anti-conservative bias on social media." Lots of people insist it's true, but when you ask for examples, you get back a lot of platitudes about "look around!" and "it's obvious" and "you're blind if you can't see it!" but rarely many actual examples. And, in the few cases where examples are given, they frequently fall apart under scrutiny.

This is true of many -- though not all -- of the examples of "cancel culture." Last fall, Cody Johnston did an amusing video arguing that cancel culture isn't a thing. I'd argue it is exaggerated, and a few points it makes are also misleading, but on the whole he's got a point. Many of the examples of "cancel culture" are really just the powerful and the privileged receiving some modicum of pushback for horrific actions or statements, pushback that maybe pushed them down a rung from the very top of the ladder, but still left them in pretty privileged positions compared to just about everyone else:

Are there more relevant examples? Perhaps. A lot of people pointed to Yascha Mounk's recent article in the Atlantic entitled Stop Firing the Innocent, and I mostly agree with that article. There are a few examples out there of people being unfairly fired in response to online mobs misinterpreting or overreacting to things. The story of David Shor in that article is certainly one that many people pointed out, and it does highlight what seems like an overreaction (Shor appears to have been fired for merely tweeting a link to a study about historical voting patterns in response to violent v. non-violent protests, and some, somewhat ridiculously, interpreted the conclusions of that study to somehow be a condemnation of some of the current protests). Another set of well-known examples comes from Jon Ronson's book from half a decade ago, "So You've Been Publicly Shamed," which highlights a few cases of arguably unfair overreactions to minor offenses.

But here's the thing: after lots of people (including Mounk) called out what happened to Shor (more speech), many people now agree that his firing was wrong. And so, the cycle continues. Speech, counterspeech, more counterspeech, etc. Sometimes, in the midst of all that speech, bad things happen -- such as the firing of Shor. But is that an example of cancel culture run amok, or one bad result out of millions? It is very much like our debates on content moderation. Mistakes are sometimes made. It is impossible to get it right every time. But a few "bad" examples here and there are not evidence of a widespread trend.

Also, I'm still hard-pressed to see how the level here is any worse than it was a few decades ago. There may be different issues over which public shaming may occur, but it wasn't that long ago that people would be ostracized for suggesting it's okay to fall in love with someone of the same gender or someone of another race. On the whole, I'd argue that we've made a lot of progress in opening up avenues of discussion -- and while we should be concerned about the cases that go wrong, the evidence that there's some big change beyond what has happened in the past is lacking. Indeed, I feel like I remember this nearly identical debate from when I was a kid and the fight was over "too much political correctness," which is a form of the same thing.

I think it's natural for some folks to always feel that they are being treated unfairly for their beliefs, and that people overreact. It's not a new phenomenon. It's not driven by the internet or some other new idea.
Indeed, as philosopher Agnes Callard tweeted, you can go back to John Stuart Mill's "On Liberty" to find him discussing "cancel culture" as well:
Small ISPs Being Forced To Eat The Costs Of FCC's Huawei Ban
We've repeatedly noted that while Huawei certainly engages in some clearly sketchy shit (like many modern US telecom giants), the evidence supporting the Trump administration's global blacklist of the company has been lacking. Despite more than a decade of accusations and one eighteen-month investigation that found nothing, the Trump administration still hasn't provided any public evidence supporting the central justification for the global blackballing effort (that Huawei works directly for the Chinese government to spy wholesale on Americans).

While there are certainly some valid natsec concerns in the mix when it comes to letting an authoritarian government dominate global network builds, at least some portion of the effort appears to be protectionism driven by US network hardware makers that simply don't want to compete with cheaper Chinese gear. Some of the effort is also Trump trying to obtain leverage for his often ridiculous tariff and trade war, which, at least for some advocates, is driven more by partisan patty cake or bigotry than substantive reason.

Regardless, the US effort to blackball Huawei from all global technology networks continues apace, without much concern about (1) the lack of public evidence, (2) the fact that the United States routinely does most of the stuff we're accusing China of, and (3) the fact that much of this pearl clutching has been co-opted by US companies that simply want to avoid international competition (especially in the smartphone and 5G network realm), but have had great lobbying success disguising those motivations under the guise of national security hyperventilation.

There are other problems with the campaign as well. This week the FCC formally announced it would be banning companies that take taxpayer subsidies from using any Huawei or ZTE hardware in their networks.
At the moment, the ban just prohibits them from buying new Chinese gear or maintaining existing gear, but the FCC may eventually expand it to force these ISPs to remove existing gear entirely. Smaller telecom and broadband providers were quick to note that they're not exactly thrilled:
L.A. Newspaper Sues Sheriff's Department Over Its Repeated Refusal To Comply With The Law
Very few California law enforcement agencies welcomed a new state law that finally lifted the ordained opacity that shielded misbehaving cops from the public's scrutiny. The law that went into effect at the beginning of 2019 gave California residents access to records dealing with misconduct, use-of-force, and other "bad apple" behavior for the first time in decades.

State law enforcement agencies responded to the new transparency by obfuscating, stonewalling, and suing. The smartest agencies destroyed records with their cities' blessing before the public could get to them. The state's top cop even claimed the law did not affect records created before the law went into effect, directly contradicting the legislation's author. Public records requesters sued back, knowing they were in the right. After all, not a single court in the state has aligned itself with law enforcement's fervent belief it should never be accountable ever, no matter what laws are on the books.

The Los Angeles Times is suing the Los Angeles Sheriff's Department for refusing to follow the new law. But who could blame the gang-infested LASD for being evasive, what with its unusually large number of reasons to keep the public in the dark about its activities?

Eighteen months after making a very detailed request for information about misbehaving officers, the LA Times is asking a court to benchslap the Sheriff's Department around a bit. As additional leverage, the Times is quoting the law, which makes compliance mandatory, rather than something whose nuances should be sorted out in front of a neutral party.

Before we get into the LA Times' lawsuit [PDF], let's just warm up with a statement from the LASD:
U.S. Court Of Appeals Hears Arguments That Lawsuit Against Disney For 'Pirates' Shouldn't Have Been Dismissed
Back in 2019, we wrote about a lawsuit filed against Disney by two writers who pitched a piratey movie to the company. The writers' screenplay about Davey Jones, they said, was so similar to Disney's Pirates of the Caribbean movies as to constitute copyright infringement. Much of this appeared to stem from the fact that the two writers had pitched the screenplay to Disney a few years before the Pirates franchise began, but the similarities laid out in the lawsuit were classic idea/expression dichotomy stuff.
Sci-Hub Downloads Boost Article Citations -- And Help Academic Publishers
Techdirt readers know that Sci-Hub is a site offering free online access to a large proportion of all the scientific research papers that have been published -- at the time of writing, it claims to hold 82,605,245 of them. It's an incredible resource, used by millions around the world. Those include students whose institutions can't afford the often pricey journal subscriptions, but also many academics in well-funded universities, who do have institutional access to the papers. The latter group often prefer Sci-Hub because it provides what traditional academic publishers don't: rapid, frictionless access to the world's knowledge. Given that Sci-Hub does the job far better than most publishers, it's no wonder that the copyright industry wants to shut down the service, for example by getting related domains blocked, or encouraging the FBI to investigate Sci-Hub's founder, Alexandra Elbakyan, for alleged links to Russian intelligence.

These legal battles are likely to continue for some time -- the copyright industry rarely gives up, even when its actions are ineffective or counterproductive. Academics don't care: ultimately what they want is for people to read -- and, crucially, to cite -- their work. So irrespective of the legal situation, an interesting question is: what effect do Sci-Hub downloads have on article citations? That's precisely what a new preprint, published on arXiv, seeks to answer. Here's the abstract:
Court Shoots Down AT&T, Comcast Attempt To Crush Maine Privacy Law
Over at our Tech Policy Greenhouse, former FCC official and consumer advocate Gigi Sohn just got done discussing a landmark privacy case in Maine that hasn't been getting enough attention. The short version: back in 2017, the GOP killed some pretty modest FCC broadband privacy rules at the telecom lobby's behest. Despite a lot of whining from telecom giants, those rules weren't particularly onerous -- simply requiring that ISPs be transparent about what data they're collecting and who they're selling access to, while requiring that users opt in to the sharing of more sensitive financial data.

Much like net neutrality, federal lobbying by telecom giants had an unintended impact: namely, once the feds showed they were too corrupt and captured to protect consumers, states began passing their own laws (some good, some bad) in order to fill the consumer protection void. On both the privacy and net neutrality fronts, giant ISPs like AT&T and Comcast cried repeatedly about how this created a "discordant and fractured framework of state protections," hoping you'd ignore that this was a problem the industry itself created by relentlessly attacking even the most modest federal guidelines.

Last year, Maine passed one such privacy bill modeled after the discarded FCC rules. Again the focus was largely on requiring that ISPs be transparent about what data is collected and who is buying access to it, while requiring that users opt in to the sharing and sale of access to more sensitive data. It also banned ISPs from charging you more money just to opt out of snoopvertising, something AT&T has already experimented with. The law was not, as telecom giants and their dollar-per-holler allies have claimed, particularly onerous.

Comcast and AT&T sued anyway in a bid to have the law thrown out before a broader trial. In short, ISP lawyers tried to argue that giving consumers control over their own data violates ISPs' First Amendment right to market goods and services.
They also claimed that because the law specifically targets telecom providers, it is based on their status as a "speaker" and should be subject to "strict scrutiny" under the First Amendment, which requires a law to be "narrowly tailored to serve a compelling state interest." As Sohn noted, while the case didn't get a lot of attention, the precedent of a telecom industry win here would be terrible for future efforts to pass any kind of intelligent, industry-specific privacy protections whatsoever:
Techdirt Podcast Episode 248: The Most Serious Threat To Section 230
Attacks on Section 230 are relentless and coming from all sides — so we've got another podcast all about the attempts to ruin the most important law on the internet. This week, we're joined by Riana Pfefferkorn, the Associate Director of Surveillance and Cybersecurity at the Stanford Center for Internet and Society, to discuss what is currently the most serious threat of all: the latest incarnation of the disastrous and nonsensical EARN IT Act.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Unbridled Surveillance Will Not Save Us From COVID-19
We all share the fervent desire to reopen society, to hug our friends and loved ones, to jump start the economy, and to return to the many activities that have been off limits since COVID-19 engulfed our communities.

For many, there may be a temptation to turn to invasive technologies – from temperature screening devices to contact tracing apps – that promise to stem the virus’ spread while permitting us to return to our normal routines. Many of these technologies collect the intimate details of our lives: our health status and symptoms, our associations, our locations and movements, and in some cases, even the details of our faces.

Surveillance technologies are not panaceas, and without appropriate safeguards and community trust, many technologies will cause more harm than good. In fact, some surveillance tech is simply public health theater that offers a false sense of security and provides no actual protection from the coronavirus.

When your employer, your gym, your local grocery store, or your local government suggests a new COVID surveillance gadget, here are some questions to ask, as well as some answers to keep in your back pocket.

Does it work?

Tucked into this question is another, threshold question: what does it mean to “work”? What is the goal the technology aims to achieve? What metrics will be used to measure effectiveness? What level of false positives or false negatives will be tolerated? These questions are best answered in conjunction with public health experts. In the meantime, here is what we know about some of the most popular technologies out there:

Temperature Screening

Putting aside the remarkable variability in accuracy of various temperature screening devices (pro-tip: standoff fever detectors are particularly unreliable), using elevated temperature as a proxy for COVID-19 status is both woefully under- and over-inclusive.

COVID-19 is contagious before symptoms appear, and many people remain asymptomatic for the entire course of infection.
Others may suppress a fever by taking Tylenol or ibuprofen. The fact that an individual lacks a fever does not mean that that individual is COVID-negative.

At the same time, many individuals run a fever because of conditions that have nothing to do with COVID and are not contagious, such as cancer, urinary-tract infections, or simply stress. When temperature screens are used to determine who can return to work or enter a store or a dentist’s office, healthy – or at least non-contagious – individuals will be excluded from participation in society.

Technology Assisted Contact Tracing Apps

Technology assisted contact tracing apps broadly fall into two categories: those that rely on cell phone location information, and Bluetooth proximity tracing. The former is both extremely invasive, because where you go says a lot about who you are, and likely to be ineffective for contact tracing, because the location information cell phones generate is not precise enough to determine whether two individuals are sufficiently close to risk exposure. The same is true of the location information advertisers and data brokers have been volunteering to national, state, and local governments since the pandemic began.

By contrast, Bluetooth proximity tracing, if done right, can be achieved without revealing location information, associations, or even the identities of the individuals involved. (For a deep dive on Bluetooth proximity tracing, check out this whitepaper.)

At the same time, even Bluetooth proximity tracing cannot determine whether two individuals within six feet of each other were, in fact, separated by a wall, nor, of course, can any technology capture when COVID might move from one individual to another by temporarily resting on surfaces that are handled by multiple people.

Who is being left out?

Many of the people in communities that are most vulnerable to coronavirus are among the least likely to have a smartphone capable of running a contact tracing app.
For example, over 40 percent of those over 65 do not own a smartphone, yet the 65-and-over population accounts for more than 75 percent of COVID-related deaths. Nearly 30 percent of those who earn less than $30,000 annually lack a smartphone; these individuals are also more likely to be frontline workers who must endure increased COVID exposure simply to make a living. Similarly, people with disabilities are 20 percent less likely to own a smartphone than the general population. Although these individuals are not more likely than others to contract the coronavirus, because of their underlying health conditions, the virus may be more dangerous for them.

Even those who do own a smartphone may not have the know-how to use a contact tracing app.

Armed with this knowledge, some countries supplement contact tracing apps with credit card transaction histories and closed-circuit video footage. But credit card transaction records will not reach those who pay cash or the unbanked, who are disproportionately poorer and people of color. The idea of running video footage through facial recognition software to identify individuals is particularly pernicious; such systems are notoriously bad at recognizing women and Black people at a time when Black people are among those disproportionately likely to suffer from COVID-19.

Between the technological flaws and the people who will be left behind by tech solutions, there is a substantial risk that relying too much on technology could lull individuals into a false sense of security and undercut more effective COVID-prevention measures.
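As an aside, for readers curious how the Bluetooth proximity tracing mentioned above can avoid collecting locations or identities, the core idea fits in a few lines of code. What follows is a deliberately simplified, hypothetical sketch; real protocols, such as the Apple/Google exposure notification design, derive rotating identifiers from cryptographic keys and add safeguards this toy version omits.

```python
import secrets

class Phone:
    """Toy model of privacy-preserving Bluetooth proximity tracing."""

    def __init__(self):
        self.my_tokens = []  # random tokens this phone has broadcast
        self.heard = set()   # tokens heard from nearby phones

    def broadcast(self):
        # A fresh random token reveals nothing about the device or its owner.
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        # Record tokens observed nearby; no location or identity is stored.
        self.heard.add(token)

    def check_exposure(self, published_tokens):
        # An infected user voluntarily publishes the tokens they broadcast;
        # every other phone matches against them locally, so no central
        # party learns who was near whom.
        return bool(self.heard & set(published_tokens))
```

Because the tokens are random and matching happens on each person's own device, no server ever holds a map of who was near whom, which is the safeguard that makes voluntary participation plausible.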
For these reasons, it is imperative that any technological intervention be coupled with well-designed analog measures, such as traditional contact tracing, robust access to testing and treatment, support for those who need to isolate at home, the availability of PPE, and social distancing.

Who is being harmed?

Even when traditional contact tracing techniques are used, there are myriad individuals – such as undocumented immigrants, LGBTQ youth who come from unsafe homes, people who live in apartments with more people than they have on the lease, survivors of sexual violence and domestic violence – who could be at risk if their location, associations, or health status is released. Without proper safeguards, such as those that accompany many Bluetooth proximity tracing apps, the introduction of contact tracing technologies and surveillance technologies simply ups the ante by permitting more of this information to be collected and pooled more rapidly, creating treasure troves for data thieves and law or immigration enforcement.

Other technologies are equally pernicious. For example, imprecise technologies too often become excuses for racial profiling: when risk-detection systems produce ambiguous or unreliable results, their operators fill the vacuum with their own judgments. There is reason to believe devices like standoff temperature scanners will produce similar biases and misuse.
And, just last week, the world learned that an all-too-predictable facial recognition mismatch led to the false arrest of a Black man, turning his life and his family’s lives upside down.

Given the profound risks of harm here, it is imperative that participation in any technology-assisted COVID mitigation be voluntary, which means that important public benefits, like food stamps or housing assistance, must not be conditioned on the adoption of any particular surveillance tech, nor should such tech be a condition of employment or access to public transportation or other essential services. If temperature scanners are to be used at the gateways to businesses, doctor’s offices, or public transportation, they must be the more accurate one-to-one, properly operated, clinical-grade type, and anyone who is turned away must be provided with an alternate means to access the service.

This is important, because individuals are in the best position to judge their own circumstances and safety needs. Moreover, public health experts frequently find that coercive health measures backfire, because a distrustful public is likely to resist participation.

What legal and technological safeguards are there to mitigate harm?

Perhaps the most important way to build public trust and encourage individuals to voluntarily participate in contact tracing is to build in the appropriate legal and technical safeguards. Unfortunately, the law in this area still comes up short. We have no nationwide law governing privacy in the digital age that might regulate some of these technologies. In my home state of New York, our Governor has been insisting that the Health Insurance Portability and Accountability Act (HIPAA) covers contact tracing information. But it is not clear that HIPAA applies to traditional contact tracers, and it is pretty clear that it does not apply to many of the technological COVID interventions.
Moreover, the law contains numerous exceptions that permit law enforcement to access a person’s HIPAA-covered information without their consent.

To fill this gap – at least for contact tracing information (the analog kind and the technological kind) – here in New York, a broad coalition that includes public defenders, health care providers, and civil rights, privacy, health care, and immigration advocates is working to pass contact tracing confidentiality legislation. The bill ensures that contact tracing information will be kept confidential, will only be used for contact tracing purposes, and will be deleted once its purpose has been served. Importantly, the bill permits the use of aggregate, de-identified information to track the spread of the virus and to identify disparities among New York communities. And, most crucially, it prevents law enforcement and immigration enforcement from acting as contact tracers or accessing contact tracing information. It also makes clear that a person’s contact tracing information cannot be used against them in a court or administrative proceeding.

Law and immigration enforcement access was an obvious place to start building in privacy protections. These authorities have, time and time again, given New Yorkers, particularly Black and Brown communities – the very communities hardest hit by COVID-19 – reason for distrust. One need only look at the brutal law enforcement reaction to the ongoing protests to understand why. If individuals have any reason to believe that sharing these details of their lives will expose them or their loved ones to criminalization or deportation, they simply will not participate.

The risks associated with law enforcement participation in contact tracing are not conjecture.
In response to the recent protests in Minnesota, law enforcement there began using contact tracing techniques to track protesters – and public health officials immediately lamented that the police’s activities hampered their efforts to build trust and participation in contact tracing. Here in New York State, sheriffs’ departments have been deputized as contact tracers in Nassau County and Erie County. And in New York City, of the first 5,000 cases the contact tracing program identified, 85 percent had a phone number, and contact tracers reached 94 percent of those individuals – but only 1,800 shared contacts, underscoring the distrust New Yorkers feel about contact tracing.

The contact tracing confidentiality legislation is a start to building in the legal safeguards that must undergird any technology-assisted coronavirus intervention. There is certainly space for additional legislation, and app and device developers also have a role to play: they should be building robust privacy protections into both their products and their terms of service. And, of course, any technological interventions must be term-limited to the current pandemic. Already, some participants in the industry are endeavoring to entrench the technologies for all time. As one manufacturer wrote, “Just like 9/11 and how it impacted and changed air travel forever, this too will change the way we live and work for a long time to come.”

If that sounds Orwellian, it should. It’s not hard to imagine, for example, a network of thermal cameras that were deployed during COVID-19 repurposed to conduct suspicion-less thermal body searches – perhaps to identify those suspected of drug use.

Finally, members of the most impacted communities must be involved in contact tracing, as well as in developing the technologies that will be used to mitigate COVID-19.
These individuals are more likely to understand and serve their communities’ needs. Just as community members have been more effective at convincing their neighbors to wear masks and adhere to social distancing, community members are more likely than outsiders to convince their neighbors to identify their contacts, to get tested, to self-quarantine when necessary, and to adopt new COVID-era tech when appropriate.

***

We all want to safely re-open our communities. As we contemplate which technologies to employ to help us do that, we must remember that many of these technologies offer a devil’s bargain: the illusion of safety in return for the intimate details of your life – your health status, your associations, and your location and movements. We should be careful about which technologies we choose to adopt, and we must put in place appropriate privacy protections to build community trust and ensure safety. These protections are not just privacy and civil rights necessities; they are public health imperatives.

Allie Bohm is a policy counsel at the New York Civil Liberties Union, focusing on legislative and government affairs. She has deep expertise on women’s rights and privacy and technology. She also advocates on the full range of the NYCLU’s issues.
In The Middle Of A Pandemic, ICE Says Foreign Students Must Attend Physical Classes If They Don't Want To Be Kicked Out Of The Country
I guess the cruelty is the point.

Months after duping a bunch of foreign students into signing up for classes at a fake college run by ICE, ICE is now informing other foreign students here legitimately that their choices for the fall 2020 semester are:
Daily Deal: The WordPress Master Class Bundle ft. Elementor & WooCommerce
The WordPress Master Class Bundle featuring Elementor and WooCommerce has 7 courses to help you master the art of using WordPress and plugins for building a variety of websites. Elementor is one of the very best and most popular plugins for building pages in WordPress, and comes with a variety of widgets and elements making it very easy to build any kind of web page without using any code. WooCommerce is a plugin to help you build your own online store. Other courses cover how to build a business website, how to build a jobs board, how to build a photo gallery, and much more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Pompeo Says US May Ban TikTok; It's Not Clear That It Can
New day, new nonsense. Secretary of State Mike Pompeo did his Pompeo thing and went on Fox News saying that the US is looking at banning apps from China in the US, with a focus on TikTok, the incredibly popular social media app that is owned by the Chinese firm ByteDance:
FCC's Assault On Low-Income Broadband Program Is Making The COVID-19 Crisis Worse
While FCC boss Ajit Pai is best known for ignoring the public and making shit up to dismantle FCC authority over telecom monopolies, his other policies have proven to be less sexy but just as terrible. From neutering plans to improve cable box competition to a wide variety of what are often senseless handouts to the industry's biggest players, most of the administration's policies are driving up costs for the rural Americans and small entrepreneurs he so breathlessly pledges fealty to.

One of Pai's biggest targets has been the FCC's Lifeline program, an effort started by Reagan and expanded by Bush that long enjoyed bipartisan support until Trumpism rolled into town. Lifeline doles out a measly $9.25 per month subsidy that low-income homes can use to help pay a tiny fraction of their wireless, phone, or broadband bills (enrolled participants have to choose one). The FCC, under former FCC boss Tom Wheeler, had voted to expand the service to cover broadband connections, something Pai (ever a champion of the poor) voted down.

Despite endless lip service to the "digital divide," Pai's tenure as boss has included a notable number of efforts to scuttle the Lifeline program that weren't paid much attention to -- until a pandemic came to town. COVID-19 has shone a bright spotlight on the fact that 42 million Americans still can't access broadband (double official FCC estimates), and millions more can't afford service because regulatory capture has helped protect natural monopolies and the resulting lack of competition.

The Lifeline program is literally the bare minimum we could be doing to help make broadband more affordable for the poor. Yet despite a pandemic, new data indicates the program has shrunk (by design) notably, and now serves around one fifth of the people it could be helping thanks in part to Pai's cuts:
Federal Case Shows Cops Still Have Plenty Of Options When Dealing With Device Encryption
If no one's going to give you an encryption backdoor, maybe you just need to inconspicuously prop open the front door. That's what one cop did in this case discussed by a federal court in Minnesota. (via FourthAmendment.com)

After being picked up by Task Force Officer (TFO) Adam Lepinski on suspicion of being involved in a shooting, Johnnie Haynes asked for some phone numbers off the phone Lepinski had taken from him. (A side note: TFO Lepinski was off-duty, moonlighting as security for a parking lot when he arrested Haynes. But he was still in his full uniform. This seems problematic.)

Lepinski gave the phone back to Haynes, who unlocked it with his thumbprint. Haynes told Officer Lepinski the numbers and the officer wrote them down for him. He then gave the phone back to the officer with an indication he wished to have his phone locked again. From the order [PDF]:
More Disputes Over Trademarked Area Codes. Why Is This Allowed Again?
There are plenty of times when I have questioned why something the USPTO granted a trademark on should have been allowed to be registered at all. But the example that flummoxes me most is that you can go out there and trademark area codes. You don't hear about this all that much, but AB InBev made it somewhat famous when it acquired Chicago's Goose Island Brewing, including the trademark for its "312" brand of beer, and proceeded to file for trademarks on allllllll kinds of area codes.

Why? Why can a company lock up an identifier for a geographic region in any market designation? The answer, according to some, is that the USPTO has decided that area codes aren't purely geographic descriptions.
Lawsuit & Bi-Partisan Group Of Senators Seek To Push Back On Trump Administration's Attempt To Corrupt The Open Technology Fund
Last month we wrote about how the newly appointed head of the US Agency for Global Media (USAGM) had cleaned house, getting rid of the heads of the various organizations under the USAGM umbrella. That included Voice of America, Radio Free Europe/Radio Liberty, Radio Free Asia, Middle East Broadcasting... and the Open Technology Fund. The general story making the rounds is that Pack, a Steve Bannon acolyte, planned to turn the famously independent media operations into a propaganda arm for the Trump administration. Leaving aside the concerns about why this is so dangerous and problematic on the media side, we focused mostly on the one "different" organization under the USAGM banner: the Open Technology Fund.

OTF is incredibly important to a functioning and open internet -- especially one where freedom and privacy to communicate can work around the globe, with a focus on funding audited, open source technologies. Last week, Vice had a detailed story about what it describes as "the plot to kill the Open Technology Fund." In it, Vice notes that Pack wants OTF to fund two apps that are not open source, Freegate and Ultrasurf. While both claim to be about helping circumvent internet censorship, most activists don't trust those apps. Indeed, the story notes that the developer behind Ultrasurf agreed to a security audit by the US government, but then threatened the company that did the audit with legal action if it made the report public:
New 'National Security' Law Threatens Hong Kong Pro-Democracy Protesters With Life In Prison
Hong Kong was handed back to China in 1997 with the understanding the Chinese government would not strip away the rights granted to Hong Kong residents prior to the handover. The Chinese government has no intention of honoring that agreement, which has prompted months of protests.

The Hong Kong government has consummated its acquiescence to the Chinese government with the adoption of a harsh law that directly targets dissent and protest under the guise of securing the nation. Hong Kong residents weren't informed about the contents of the new law until after it was passed and adopted. The BBC runs down the key aspects of the new law -- none of which appear to respect the rights supposedly granted to Hong Kong residents.
For All The Hype, Trump's Favorite 'News' Channel (OAN) Faces Shrinking Footprint
The President's favorite sycophancy channel, OAN (One America News), has seen no shortage of headlines in recent weeks for its dubious "news" programming. Said programming has included claims that elderly people are Antifa agitators and that the coronavirus was created in a North Carolina lab as part of a "deep state" plot, all while banning polls that dare to suggest that dear leader may not be doing all that hot in the wake of corruption, incompetence, and a raging pandemic. That's before you get to media allegations that the outlet has some uncomfortable parallels to Russian state TV.

But the people hyperventilating over the network's Trumpist disinformation and pole position in the White House briefing room often forget to mention that the channel doesn't actually have all that many viewers. While the channel currently reaches 35 million potential households (notably fewer actually watch), that's a far cry from the 119 million TV households that have access to major news networks like CNN and Fox News. Major cable TV providers like Dish, Charter, and Comcast don't carry the channel at all, leaving Verizon and AT&T as the only major cable TV providers that think it's worth it.

And even that could be changing. The company's contract with AT&T/DirecTV, first signed in 2017 and expiring in 2021, could be in trouble, most notably because of the tough terms affixed to carriage at a time when cable providers are already being pushed to cut expensive dead weight due to cord cutting:
Researcher Buys Axon Cameras On eBay, Finds They're Still Filled With Recordings
Data isn't secure just because nothing happened to it while it was still in your possession. It can still "leak" long after the storage device has gone on to its second life in someone else's hands.

The Fort Huachuca Military Police were just apprised of this truism by Twitter user KF, who had purchased some used Axon body cameras on eBay. The cameras still contained their microSD storage cards. And contained on those storage cards were a bunch of recordings (including audio) that hadn't been wiped by the MPs before the cameras ended up on eBay.
Daily Deal: Master the Science of Memory, Leadership & Focus Bundle
The Master the Science of Memory, Leadership and Focus Bundle has 9 courses to help you better manage yourself and others. Courses cover how to gain substantial improvement in your capacity to focus, how to overcome procrastination, how to spot media manipulation, how to deal with stress, and more. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
New EARN IT Act Creates An Insane New Dilemma: Either Encrypt All Or Spy On All
Last week, as predicted, the Senate Judiciary Committee voted unanimously to replace the original EARN IT Act with a new one. As part of the markup, they also voted to approve Senator Patrick Leahy's amendment, which some might read to say that EARN IT cannot be used to block encryption -- but the reality is a lot more complicated. As I'll explain, this new bill is terrible in a different way than the old bill: it will create a new dilemma in which internet services will either feel compelled to encrypt everything or in which the only way you'll be able to use any internet service is if you hand over a ton of personal information to the service provider -- potentially putting your privacy at extreme risk.

First, let's acknowledge an oddity about this new bill. Both bills involve the creation of a commission to come up with "best practices" for trying to stop "child sexual abuse material" or CSAM (the concept formerly known as child porn). In the old bill, if sites didn't follow the commission's best practices, they could lose their Section 230 protections. This resulted in fears that the commission would outlaw encryption as a "best practice." The new bill retains the commission, but for no recognizable purpose. Instead, it does away with the pretense and just says that a bunch of sites should lose Section 230 protections no matter what. It seems quite odd to first say "we need a commission to determine best practices" and then on a second pass say that before the commission has done anything we're just going to make massive changes to Section 230 based on... nothing at all. No evidence saying that this would create better outcomes. No evidence that Section 230 is a problem with regards to CSAM. Just... nothing.

Specifically, the new bill makes a change to Section 230 that looks similar to the change that was made with FOSTA, saying that you don't get 230 protections if you advertise, promote, present, distribute, or solicit CSAM.
But here's the thing: CSAM is already a federal crime and all federal crimes are already exempted from Section 230. On top of that, it's not as if there are a bunch of cases anyone can trot out as examples of Section 230 getting in the way of CSAM prosecutions. There's literally no evidence that this is needed or will help -- because it won't.As we've detailed before, the real scandal in all of this is not that internet companies are facilitating CSAM, but that the DOJ has literally ignored its Congressional mandate to go after those engaged in CSAM production and distribution. Congress tasked the DOJ with tackling CSAM and the DOJ has just not done it. The DOJ was required to compile data and set goals to eliminate CSAM... and has just not done it. That's why it's bizarre that EARN IT is getting all of the attention rather than an alternative bill from Senators Wyden, Gillibrand, Casey and Brown that would tell the DOJ to actually get serious about doing its job with regards to CSAM, rather than blaming everyone else.But digging into the details, the real problem here is that, as structured, the new EARN IT Act would be a disaster in trying to achieve the goals the sponsors have set out for it. First off, thanks to the addition of Senator Leahy's Amendment, some may see the bill as one that effectively requires encryption to avoid liability for CSAM. Even that's not totally clear, however. While you can read Leahy's amendment to say that encryption is protected, the actual structure of the final bill punts many issues to state law, and that means having to comply with 50 different state laws. 
Some, like Illinois, have lower standards for the mens rea regarding CSAM, and the worry is that we won't know whether or not offering end-to-end encryption would be seen as violating state laws until long and costly cases go through their lengthy process.Either way, this weird CSAM carveout from Section 230 is somewhat equivalent to the moderator's dilemma that other attempts to change Section 230 create. Because most of those other reforms put in place a "knowledge" standard, it gives many sites a reason to never look at the content on their platform. In this case, due to the explicit call out saying that encryption isn't impacted, that would effectively say that if you want to keep 230 protections, you should encrypt absolutely everything. Which, ironically, is the exact opposite of what Attorney General Bill Barr has been asking for.But, as with the moderator's dilemma, there's also a flipside (if you don't want to ignore everything, then you have to greatly restrict what you allow through). Under the new EARN IT, the flipside is that the government more or less says that you are now responsible for being able to track and identify anyone on your service who is not using encryption -- meaning you would need to carefully verify every user of your platform. No more simple signups. No more anonymity. And, incredibly, this would mean that sites would need to collect a ton of data on every user. Want to use this new service? First submit your phone number, driver's license, etc.At a time when people are saying they trust big internet companies less and less with their data, why would Senators Graham, Blumenthal, Feinstein, and Hawley (HAWLEY!?!?) 
be encouraging websites to collect even more (and more intrusive) data on all their users?Since this is somewhat different than the traditional moderator's dilemma, it might be called the "censor's dilemma" or possibly the "middleman's dilemma," in that this is even more tied to the government's demand that websites block certain content entirely, which puts them in the role of a government middleman or censor (which, not coincidentally, would raise serious constitutional issues with the EARN IT Act turning private entities into state censors).Either way it is difficult to see how these two outcomes are what Congress (or, for that matter, the DOJ) actually wants:
Chinese 5G Plans Start At $10, Showing The 'Race to 5G' Isn't Much Of One
We've noted for a while that the "race to 5G" is largely just the byproduct of telecom lobbyists hoping to spike lagging smartphone and network hardware sales. Yes, 5G is important in that it will provide faster, more resilient networks when it's finally deployed at scale years from now. But the society-altering impacts of the technology are extremely over-hyped, international efforts to deploy the faster wireless standard aren't really a race, and even if it were, our broadband maps are so terrible (by design) it would be impossible to actually determine who won.

A big component of the "race to 5G" includes the idea that we must "beat" China. So far, the general consensus is that the only way to "defeat China" is to mindlessly pander to U.S. telecom giants in the form of merger approvals, tax breaks, subsidies, and other perks. But these favors not only don't result in better or more broadly available networks as promised, they only cement consolidation, limited competition, higher prices, and generally poor customer service. The other part of "beating" China involves blacklisting Chinese gear makers like Huawei for spying on Americans, then refusing to share public evidence of them doing so.

A lack of mid-band spectrum here in the States has resulted in measurably slower 5G networks than we're seeing in other countries, including China. And while US regulators focus largely on kissing entrenched providers' asses via dubious, unpopular policy decisions (killing all telecom consumer protections, rubber stamping the Sprint/T-Mobile merger), China's state-owned carriers China Mobile, China Unicom, and China Telecom have taken a wide deployment lead.

How much of a lead is largely impossible to say, given the unreliability of both US and Chinese data. Transparency isn't traditionally a priority for state-owned telecom agencies.
Meanwhile, here in the US, ISPs have spent years lobbying against better maps, since better maps would highlight the industry's deployment and competition shortcomings. And while we have made some small progress toward better mapping, US wireless carriers, which have spent the last few years lying about where 5G is available, are already lobbying to exclude 5G networks from these improvements.

Comparisons on pricing are a little easier, though there, too, it's a race we're pretty clearly not winning anytime soon. Here in the States, consumers already pay some of the highest prices in the developed world for 4G mobile data. So far, 5G looks to be even more expensive, with carriers like Verizon not only charging users $10 more for 5G, but banning HD and 4K video unless consumers pony up even more money.

While numerous aspects of China's state-owned telecom industry are ugly (surveillance, censorship) and unworthy of emulation, growing competition among MVNOs (mobile virtual network operators), only established in 2013, has driven down 5G prices to the point where users can nab a 5G connection starting around $10 per month:
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner on the insightful — also racking up quite a lot of funny votes — is Nick-B referring Trump supporters to another recent post:
This Week In Techdirt History: June 28th - July 4th
Five Years Ago

This week in 2015, a missing document from the FISA court docket suggested that there was yet another undisclosed bulk records collection program hiding somewhere, while newly-released Wikileaks documents revealed that, despite its denials, the NSA was engaged in economic espionage, and a fresh FISA order authorized "as-is" phone record collections for the next six months. Just like today, the FBI was on an anti-encryption streak, fearmongering about "going dark" despite actual wiretaps almost never running into encryption. And the MPAA was launching another ad campaign against piracy... targeted at paying customers, for some reason.

Ten Years Ago

This week in 2010, we looked at the list of ten questions for ACTA negotiators that were being taken to a meeting in Sweden, and unsurprisingly got more of the same old stuff for answers. We looked at an economic analysis of the Viacom/YouTube decision, and then at the other important ruling of the week: the Supreme Court's narrow take on Bilski, which let business method and software patents survive while leaving the door open for future cases that might change things — all of which required a bit of tea leaf reading to determine what the court was truly thinking about software patents.

Fifteen Years Ago

This week in 2005, the Supreme Court issued its expected rulings in both the Grokster and Brand X cases, with a mixed bag of results — while former RIAA boss Hilary Rosen suddenly realized this kind of fight was probably harming the RIAA's future. A Taiwanese court ruled that file sharing software is perfectly legal, while Sweden's terrible file sharing law went into effect. Meanwhile, AMD resurrected its antitrust attack on Intel, and took out a bunch of ads to make its case to the public, though we wondered if the public would actually care.
Research Libraries Tell Publishers To Drop Their Awful Lawsuit Against The Internet Archive
I've seen a lot of people -- including those who are supporting the publishers' legal attack on the Internet Archive -- insist that they "support libraries," but that the Internet Archive's Open Library and National Emergency Library are "not libraries." First off, they're wrong. But, more importantly, it's good to see actual librarians now coming out in support of the Internet Archive as well. The Association of Research Libraries has put out a statement asking publishers to drop this counterproductive lawsuit, especially since the Internet Archive has shut down the National Emergency Library.
That Was Quick: Appellate Court Says Simon & Schuster Not Subject To Prior Restraint Order Over Mary Trump's Book; But Fight's Not Over Yet
Yesterday we wrote about how Charles Harder, representing the President's brother, was able to get a highly questionable temporary restraining order (TRO) against Mary Trump and Simon & Schuster, barring publication of Mary Trump's book, "Too Much and Never Enough: How My Family Created the World’s Most Dangerous Man." We noted that the prior restraint seemed unlikely to survive appellate scrutiny, and within a few hours it was already greatly limited. NY Appellate Court judge Alan Scheinkman wrote a much more thorough opinion than the (lower and misleadingly named) Supreme Court judge's ruling on the TRO.

In it, he says that the TRO should be lifted from Simon & Schuster as a non-party to the confidentiality agreement signed between Mary Trump and others in her family. However, that does not necessarily mean the publication will go ahead. A somewhat modified order remains in place against Mary Trump, with the recognition that the more thorough hearing about the order will take place prior to the book's planned release anyway, which the judge seems to feel means that the order is not yet restricting any speech.
Boston The Latest City To Ban Facial Recognition Use By Government Agencies
San Francisco led the way. Then the entire state of California followed suit. And on the other side of the country, a few smaller cities in Massachusetts did the same thing: banned facial recognition.

It just makes sense. The tech that's out there is as dangerous as it is unproven. Mostly known for its false positive rates, facial recognition software has shown it's capable of amplifying existing biases into actionable "intel" with the power to severely disrupt people's lives.

It's not just hypothetical. Just recently, a Michigan man became the first person known to be arrested over a facial recognition false positive. He was detained for 30 hours based on a mismatch delivered by facial recognition software. Even companies that have been pitching facial recognition tech to law enforcement agencies have pulled back in recent weeks, refusing to become part of the problem… at least for the time being. One company that has done nothing but sell tech to cops has decided it won't be adding facial recognition to its near-ubiquitous body cameras.

With all of this going on, news of another facial recognition ban in a major city is no longer surprising. But it's still welcome news.
Charter Spectrum Lobbies FCC To Kill Time Warner Cable Merger Conditions
When Charter proposed its $79 billion acquisition of Time Warner Cable and Bright House Networks, former FCC boss Tom Wheeler brought in net neutrality advocate Marvin Ammori to help hammer out conditions that wound up actually being semi-meaningful, a rarity in the telecom space. Under the deal, Charter was banned from imposing usage caps, engaging in interconnection shenanigans with content providers like Netflix, or violating net neutrality (even if the rules themselves were killed) for a period of seven years. Charter was also required to expand broadband to 2 million additional locations. Granted, a lot has happened since those conditions were imposed in 2016. That includes the FCC basically folding like wet cardboard under pressure from telecom lobbyists, not only killing all meaningful net neutrality rules but also gutting its own authority over telecom, creating massive gaps in basic consumer protections. Obviously feeling unfairly excluded from all the corruption, Charter is now lobbying the FCC to eliminate most of the deal's conditions, claiming that because the streaming video market is just so damn competitive, the conditions have proven themselves unnecessary:
Facebook Follows Twitter In Recognizing A 'More Speech' Approach Is Best For Newsworthy Liars
As you may recall, a few weeks back, Twitter made a decision to add a fact check to some tweets by President Trump, and a few days later, to put a label on some of his tweets, saying that they violated Twitter's policies and would normally be deleted, but that given the newsworthiness of the speaker, they would be left up (though without the ability to comment on or retweet them). The president reacted about as well as expected, meaning he whined vociferously, and eventually issued a silly executive order. Of course, the other end of this story was that Trump posted some of the same content to Facebook, and Facebook chose to do nothing. Indeed, Mark Zuckerberg pulled out this ridiculous self-serving, sanctimonious nonsense about how Facebook would allow that content because he didn't want to be "the arbiter of truth." Except, of course, Facebook does fact checks and content moderation all the time. This seemed to be a lot more about currying favor with the president than any principled stand. It created a big fuss within (and outside) the company, and as with any situation in which a social media website says it's taking a hands-off approach, it eventually proves to be totally unworkable. It seems to have taken all of a month for Facebook to recognize this as well. On Friday, Mark Zuckerberg announced a bunch of changes to Facebook's policies that appear to be pretty damn similar to what Twitter did a month earlier, which Zuckerberg originally pretended was a bad idea. Amidst a larger rollout of changes to fight voter suppression and misinformation, there was this:
Daily Deal: The CompTIA Secure Cloud Professional Bundle
The CompTIA Secure Cloud Professional Bundle has 2 courses to help you prepare for the CompTIA Cloud+ (CV0-002) and CompTIA Security+ (SY0-501) exams. The CompTIA Cloud+ covers the increased diversity of knowledge, skills, and abilities required of system administrators to validate what is necessary to perform effectively in data center jobs. CompTIA Security+ covers the essential principles for network security and risk management. It's on sale for $30. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Rather Than Attacking Section 230, Why Aren't Trump Supporters Angry About The DMCA That's Actually Causing Issues?
A few weeks back, we wrote about how one of Donald Trump's tirades over Twitter "moderating" him, in which he blamed Section 230, was totally misplaced. The actual issue was about copyright and Section 512 of the DMCA. That was a case where a copyright claim took down a Trump campaign video after a copyright holder claimed it infringed. Last week, we saw copyright again cause trouble in Trump world -- and again, Trump's fans blamed Twitter and Section 230 rather than the problems of the DMCA. This time, it involved a well-known Trump mememaker going by the name Carpe Donktum, who makes generally lame "MAGA memes." Early last week, Twitter permanently shut down his account, and all the Trumpalos went nuts. A writer for the Federalist, Mollie Hemingway, laughably called it "election interference" by Twitter: Except, as you can see in that very screenshot (which Mollie apparently didn't read before posting it), Twitter shut down his account for repeated infringement under the DMCA. Twitter later confirmed exactly that.
YouTube Jacks Live TV Streaming Prices 30%, As Streaming Sector Starts To Resemble Good Old Cable
There's absolutely no doubt that the streaming TV revolution has, by and large, been a positive thing. Thanks to a ridiculous surge in streaming TV competitors, consumers now have far more options than they've ever had before, resulting not only in lower prices and more flexibility in TV options, but customer service that far surpasses the clumsy trash fire that is Comcast customer service. But all is not well in paradise. Those laboring under the illusion that this competition would magically rid the sector of its worst impulses will likely soon be broken of that notion, as YouTube this week announced it would be raising prices for its streaming TV service by some thirty percent (a $15 hike, to $65 a month). Much like traditional cable TV vendors have done for years, YouTube blames the hikes on the high cost of programming and innovative improvements to the platform:
Detroit Police Chief Says Facial Recognition Software Involved In Bogus Arrest Is Wrong '96 Percent Of The Time'
The law enforcement agency involved with the first reported false arrest linked to facial recognition software is talking about its software. The Detroit Police Department -- acting on a facial recognition "match" handed to it by State Police investigators -- arrested resident Robert Williams for allegedly shoplifting watches from an upscale boutique. Williams did not commit this robbery. He even had an alibi. But the investigators weren't interested in his answers or his questions. They had a lo-res screen grab from the store's CCTV camera -- one that software provided by DataWorks Plus said matched Williams' driver's license photo. Thirty hours later, Williams was cut loose by investigators, one of whom said "I guess the computer screwed up" after rewatching the camera footage with Williams present. The officers ignored the bold letters on top of the "match" delivered by the software. The writing said "This document is not a positive identification." It also said the match was not "probable cause for arrest." Unfortunately for the misidentified Michigan resident, the cops who arrested him treated the printout as both: positive identification and probable cause. The policies governing law enforcement's use of this tech have changed since Williams' arrest in January. Under the current policy, lo-res images like the one that led to this arrest are no longer allowed to be submitted to the facial recognition system. That fixes a very small part of the problem. The larger problem is that the tech is mostly good at being bad. This isn't a complaint from critics. This comes directly from the top of the DPD.
How An NYPD Officer Can Hit A Teen With His Car In Front Of Several Witnesses And Get Away With It
The NYPD has made internal discipline procedures a loop so closed that even its "independent" oversight -- the Civilian Complaint Review Board -- can't get in the door. The NYPD is effectively its own oversight. Decisions made by the CCRB can be overridden by the Police Commissioner. Even if the Commissioner agrees with the findings, recommended punishments can be departed from or ignored completely. This story of police misconduct springs from this accountability void. ProPublica journalist Eric Umansky lives in New York City. Last year, while trick-or-treating with their daughter, his wife witnessed an unmarked police car hit a black teen. This happened in full view of several witnesses, including a person who worked at a business near the scene of the accident.
NY Judge Apparently Unaware Of The Supreme Court's Ban On Prior Restraint: Puts Temporary Restraining Order On Trump's Niece's Book
Last week, we wrote about the president's brother, Robert Trump, suing his (and the president's) niece, Mary Trump, to try to block her from publishing her new book that criticizes the president. The initial filing to block the publication failed for being in the wrong court, but the follow-up attempt has succeeded, at least temporarily. NY Supreme Court (despite the name, this is the equivalent of the district court in NY) Judge Hal Greenwald doesn't seem to have bothered to do even a cursory 1st Amendment analysis regarding prior restraint, but agreed to rush out a temporary restraining order, while ordering the parties to brief the matter before July 10th on whether or not the ban should be made permanent. This is not how this works. As Walter Sobchak famously explained: "the Supreme Court has roundly rejected prior restraint." Or, as 1st Amendment lawyer Ken "Popehat" White notes:
Senate Waters Down EARN IT At The Last Minute; Gives Civil Liberties Groups No Time To Point Out The Many Remaining Problems
As expected, the EARN IT Act is set to be marked up this week, and today (a day before the markup) Senators Graham and Blumenthal announced a "manager's amendment" that basically rewrites the entire bill. It bears some resemblance to the original bill, in that it will also create a giant "national commission on online child sexual exploitation prevention" to "develop recommended best practices" that various websites can use to "prevent, reduce, and respond to the online sexual exploitation of children," but it removes the whole "earn it" part of the "EARN IT" Act, in that there seem to be no legal consequences for any site not following these "best practices" (yet). In the original bill, not following the best practices would lose sites their Section 230 protections. Now... not following them is just... not following them. The Commission just gets to shout into the wind. Of course, we've seen mission creep on things like this before, where "best practices" later get encoded into law, so there remain significant concerns about how this all plays out in the long run, even if they've removed some of the bite from this version. Instead, the major "change" with this version of EARN IT is that it basically replicates FOSTA in creating a specific "carve out" for child sexual abuse material (CSAM, or the artist formerly known as "child porn"). It's almost an exact replica of FOSTA, except instead of "sex trafficking and prostitution" they say the same thing about 230 not impacting laws regarding CSAM. This is... weird? And pointless? It's not like there is some long list of cases regarding CSAM where Section 230 got in the way. There are no sites anyone can point to as "hiding behind Section 230" in order to encourage such content. This is all... performative. 
And, if anything, we're already seeing people realize that FOSTA did nothing to stop sex trafficking, but did have massive unintended consequences. That said, there are still massive problems with this bill, and that includes significant constitutional concerns. First off, it remains unclear why the government needs to set up this commission. The companies have spent years working with various stakeholders to build out a set of voluntary best practices that have been implemented and have been effective in finding and stopping a huge amount of CSAM. Of course, there remains a lot more out there, and users get ever sneakier in trying to produce and share such content -- but a big part of the problem seems to be that the government is so focused on blaming tech platforms for CSAM that it does little to nothing to stop the people who are actually creating and sharing the material. That's why Senator Wyden tried to call law enforcement's bluff over all of this by putting out a competing bill that basically pushes law enforcement to do its job, which it has mostly been ignoring. On the encryption front: much of the early concern was that this commission (with Attorney General Bill Barr's hand heavily leaning on the scales) would say that offering end-to-end encryption was not a "best practice" and thus could lead to sites that offered such communication tools losing 230 protections for other parts of their site. This version of EARN IT removes that specific concern... but it's still a threat to encryption, though in a roundabout way. Specifically, in that FOSTA-like carve out, the bill would allow states to enforce federal criminal laws regarding CSAM, and would allow states to set their own laws for what counts as the knowledge standard necessary to show that a site "knowingly" aided in the "advertisement, promotion, presentation, distribution or solicitation" of CSAM. And... 
you could certainly see some states move (perhaps with a nudge from Bill Barr or some other law enforcement official) to say that offering end-to-end encryption trips the knowledge standard on something like "distribution." It's roundabout, but it remains a threat to encryption. Then there are the constitutional concerns. A bunch of people had raised significant 4th Amendment concerns that if the government was determining the standards for fighting CSAM, that would turn the platforms into "state actors" for the purpose of fighting CSAM -- meaning that 4th Amendment standards would apply to what the companies themselves could do to hunt down and stop those passing around CSAM. That would make it significantly harder to actually track down the stuff. With the rewritten bill, this again is not as clear, and there remain concerns about the interaction with state law. Under this law, a site can be held liable for CSAM if it was "reckless," and there are reasons to believe that state laws might suggest it's reckless not to monitor for CSAM -- which could put us right back into that state actor 4th Amendment issue. These are not all of the problems with the bill, but frankly, the new version is just... weird? It's like they had that original "earn" 230 idea worked out, became convinced it couldn't actually work, but were too wedded to the general idea to craft a law that actually works. So they just kinda chucked it all and said "recreate FOSTA" despite that not making any sense. Oh, and they sprang this on everybody the day before the markup, giving most experts almost no time to review and analyze. This is not how good lawmaking is done. But what do you expect these days?
Brazil's Proposed 'Fake News' Law Says Internet Users Are Guilty Until Proven Innocent, Demands Constant Logging From ISPs
Brazil's legislature is set to vote on its proposed "fake news" law. This law would criminalize speech the government doesn't like, under the handy theory that anything it doesn't like must be "fake." There was some mobilization on this not-even-legal-yet theory back in 2018, ahead of an election, when the Federal Police announced it would be keeping an eye on the internet during the election process. There are plenty of ways to combat misinformation. Giving this job to people with guns is the worst solution. The EFF has put together a summary of the worst aspects of the proposed law. And they are the worst. First and foremost, lawmakers have realized a law that targets users the government can't identify is completely worthless. Brazilians will pretty much need a license to communicate with others -- something achieved by turning platforms and app makers into bouncers at the internet nightclub.
Parler Speedruns The Content Moderation Learning Curve; Goes From 'We Allow Everything' To 'We're The Good Censors' In Days
Over the last few weeks Parler has become the talk of Trumpist land, with promises of a social media site that "supports free speech." The front page of the site insists that its content moderation is based on the standards of the FCC and the Supreme Court of the United States: Of course, that's nonsensical. The FCC's regulations on speech do not apply to the internet, just to broadcast television and radio over public spectrum. And, of course, the Supreme Court's well-established parameters for 1st Amendment protected speech have been laid out pretty directly over the last century or so, but the way this is written makes it sound like any content to be moderated on Parler will first be reviewed by the Supreme Court, and that's not how any of this works. Indeed, under Supreme Court precedent, very little speech is outside of the 1st Amendment these days, and we pointed out that Parler's terms of service did not reflect much understanding of the nuances of Supreme Court jurisprudence on the 1st Amendment. Rather, it appeared to demonstrate the level of knowledge of a 20-something tech bro skimming a Wikipedia article about exceptions to the 1st Amendment and just grabbing the section headings without bothering to read the details (or talk to a 1st Amendment lawyer). Besides, as we pointed out, Parler's terms of service allow it to ban users or content for any reason whatsoever -- suggesting there wasn't much conviction behind the "we only moderate based on the FCC and the Supreme Court" line. Elsewhere, Parler's CEO says that "if you can say it on the street of New York, you can say it on Parler." Or this nugget of nonsense:
Daily Deal: The Ultimate Artificial Intelligence Scientist Bundle
The Ultimate Artificial Intelligence Scientist Bundle consists of four courses covering Python, Tensorflow, Machine Learning, and Deep Learning. You will learn about complex theories, algorithms, coding libraries, Artificial Neural Networks, and Self-Organizing Maps. You'll also learn about the core principles of programming, data validation, automatic dataset preprocessing, and more. It's on sale for $35. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
'But Without 230 Reform, Websites Have No Incentive To Change!' They Scream Into The Void As Every Large Company Pulls Ads From Facebook
One of the most frustrating lines that we hear from people criticizing internet website content moderation is the idea that thanks to Section 230 of the Communications Decency Act, websites have no incentive to do any moderation. This is a myth that I consider to be the flip side of the claims by aggrieved conservatives insisting that Section 230 requires "no bias" in moderation decisions. The "no incentive" people (often lawyers) are complaining about too little moderation. For reasons I cannot comprehend, they seem to think that the only motivation for doing anything is that the law requires you to do it. We've tried to debunk this notion multiple times, and yet it comes up again and again. Just a couple weeks ago in a panel about Section 230, a former top Hollywood lobbyist trotted it out. I've been thinking about that line a bunch over the past few days, as a huge number of large companies began pulling ads from Facebook as part of a "Stop Hate for Profit" campaign put together by a bunch of non-profits. Over 200 companies have said they've joined the campaign and pulled their Facebook ads, including some big names, like Unilever, Verizon, Hershey, The North Face, Clorox, Starbucks, Reebok, Pfizer, Microsoft, Levi's, HP, Honda, Ford, Coca-Cola and many, many more. Now, the cynical take on this is that with the current economic conditions and a global pandemic, many were looking to pull back on advertising anyway, and joining this campaign was a way to do so and get a bit of an earned media boost at the same time. But many of the companies are putting out statements demanding that Facebook change its practices before they'll bring back ads. Here's an open letter from Levi's: