Americans today are faced with a dilemma – there is a vast universe of products that let us control everything in our lives with a voice command or the touch of a button. We can unlock our doors, turn on the heat, track our exercise routines and our baby monitors, and perform a million other tasks in ways that make life easier or more efficient. But these conveniences carry with them the danger that the data they generate will be used against us.

Far too often, information that a government or company can collect and retain is being collected and retained, and then shared with or sold to other companies, marketers or agencies in ways that Americans never consider when they decide to buy a new thermostat. When the government or private corporations can tap into these stacks of information, the smart devices that make our lives easier also amount to spies working against our interests.

There is no good reason that Americans should have to compromise on privacy to benefit from the digital age. Consumers want smart devices, but we also want companies and the government to mind their own business when it comes to our personal information.

Over the past decade, I've made protecting Americans' privacy against unnecessary government surveillance one of my top priorities. And following the Cambridge Analytica scandal, I've spent a lot of time thinking about how to create a commonsense plan to secure our privacy from corporations that haven't been good stewards of private information. That's why I wrote a draft privacy bill and, after a year of soliciting feedback from experts, introduced the Mind Your Own Business Act last fall.

It's based on three core principles: First, corporations should be required to provide full transparency, in easy-to-understand language, about how they collect, use and share their customers' data — and they should be held to those commitments. There should never be another scandal like we saw with wireless carriers, when phone companies shared real-time location data with bounty hunters, scammers and creepy exes without their customers' knowledge.

Second, users need far more control over how their data is shared. The Mind Your Own Business Act would put teeth back into the Do Not Track option that has become essentially useless today. Under my bill there would be a single website where consumers could click a button to say that no company could share their information with a third party without their express permission. Consumers could choose whether to allow their data to be shared with third parties and used for targeted ads, and companies would have to offer tracking-free versions of their products that don't cost more than the average revenue generated from a user's data. The bill also makes sure low-income families can get free privacy protection, so privacy isn't a luxury good.

Third, there need to be real consequences for corporations that break the rules. My bill follows the European privacy law and California's Consumer Privacy Act in adding fines of up to 4 percent of annual revenue, and even the possibility of jail time for executives who lie to the Federal Trade Commission (FTC) about protecting users' privacy.

Those are some key points, but my plan does a lot more as well. Because privacy is also about making sure companies protect the data they have, my bill directs the FTC to set baseline privacy and cybersecurity standards and beefs up the people and resources the agency needs to enforce those rules.
It also requires companies to assess their algorithms to detect whether they produce biased results, and to fix the problems they find.

My bill will create a healthier internet economy in two separate ways: First, consumers can directly choose to pay for ironclad privacy instead of data-scooping free services. But even users who don't opt out will see major improvements in privacy from the baseline rules and new transparency requirements. Companies often have no choice but to terminate their shady deals with third-party data dealers once those deals become public. With my bill, companies will be forced to disclose exactly who sees your data, and they will face steep penalties for lying about it.

Americans are sick of being left with a feeling of vague unease after clicking through pages of fine print. Congress needs to step up, add guardrails for our privacy and stop the endless series of Sophie's choices between technological advances and personal privacy. We must also reform the legal treatment of "business records" so that information created to make technology work better for you and your family is treated like private, personal effects, not subject to government prying without a warrant.

It's time to level the playing field between consumers and the corporations that profit from our data, and force companies to finally take Americans' privacy seriously.
While US journalism is certainly in crisis mode, it's particularly bad on the local level, where most local newspapers and broadcasters have been either killed off or consolidated into large corporations, often resulting in something that's less news, and more homogenized dreck (see: that Deadspin Sinclair video from a few years back). Data suggests this shift has a profoundly negative impact on the culture, resulting in fewer investigations of corruption, a more divided and less informed populace, and even swayed political outcomes as nuanced local coverage is replaced with more partisan, national news.

The latest case in point: as Amazon has faced questions about warehouse worker safety during the pandemic, the company has been pushing local news outlets to carry a gushing piece of fluff PR loosely disguised as journalism. More than 11 local broadcasters agreed to do so, and the result is... well, see for yourself:
The German government pretended to be bothered by the NSA's spying when the Snowden leaks began, claiming surveillance of overseas allies was somehow a bit too much. It had nothing to say about its own spying, which was roughly aligned with the NSA's "collect it all" attitude. This could be chalked up to "Five Eyes" envy, perhaps. The NSA works with four other countries to hoover up massive amounts of data directly from internet fire hoses located around the world, but Germany has never made the cut.

While the German Chancellor made a lot of noise about being surveilled, Germany's intelligence agencies continued to perform both domestic and foreign surveillance, resulting in legal challenges to the country's surveillance programs. The German Constitution restricts domestic surveillance but doesn't have nearly as much to say about subjecting foreigners to intrusive snooping. Foreigners are usually considered fair game -- non-recipients of the protections given to citizens of whatever country does the spying.

One legal challenge dead-ended when a German court decided a service provider couldn't sue on behalf of its spied-upon users. But others continued, and there's good news to report.
With the explosion of the video game industry and the technology that has come along with it, it's starting to get really fun to see what creative minds can do inside of the gaming realm. It's turning games into something much more than they would have been 20 years ago. Back then, games were singular in purpose: play the video game. Today they can be so much more when done right. They can be a social ecosystem. They can be economies unto themselves.

Or they can be a place to premiere top-tier movie trailers, as in the case of Fortnite.
President Trump is not happy with Twitter. But a lot of other people were already unhappy with Twitter. As his tweets have grown more abusive by the day, and the non-insane public has naturally grown more outraged by them, there has been an increase in calls for Twitter to delete his tweets, if not his account outright. But what's worse is the increase in calls that sound just like what Trump now demands: that Section 230 must be changed if Twitter is unwilling to take those steps. Both are bad ideas, however, for separate, although related, reasons.

The basic problem is that there is no easy answer for what to do with Trump's tweets, also for many reasons. One fundamental reason is that content moderation is essentially an impossible task. As we've discussed many, many times before, it is extremely difficult for any platform to establish an editorial policy that will accurately catch 100% of the posts that everyone agrees are awful and no posts that are fine. And part of the reason for that difficulty is that there is no editorial policy that everyone will ever be able to agree on. It's unlikely that one could be drawn up that even most people would agree on, yet platforms regularly attempt to give it their best shot anyway. But even then, with some sort of policy in place, it is still extremely difficult, if not impossible, to quickly and accurately ascertain whether any particular social media post, amidst the enormous deluge of social media posts being made every minute, truly runs afoul of it. As we have said umpteen times, content moderation at scale is hard. Plenty is likely to go wrong for even the most well-intentioned and well-resourced platform.

Furthermore, Trump is no ordinary tweeter whose tweets may run afoul of Twitter's moderation policies. Trump happens to be the President of the United States, which is a fact that is going to strain any content moderation policy primarily set up to deal with tweets by people who are not the President of the United States. It is possible, of course, to decide to treat him like any other tweeter, and many have called for Twitter to do exactly that. But it's not clear that doing so would be a good idea. For better or for worse, his tweets are the tweets of the American Head of State and inherently newsworthy. While one could argue that they should be suppressed because their impact is so prone to being so destructive, it would not be a costless decision. While having the President of the United States tweeting awful things does cause harm, not knowing that the President of the United States is trying to tweet awful things presents its own harm. This is the person we have occupying the highest political office in the land. It would not do the voting public much good if they could not know who he is and what he is trying to do.

The arguments for suppressing his tweets are largely based on the idea that taking away his power to tweet would take away his power to do harm. But the problem is that his power comes from his office, not from Twitter. Taking Twitter away from him doesn't ultimately defang him. It just defangs the public's ability to know what is being done by him in their name.

Twitter's recent decision to add contextualization to his tweets might present a middle ground, although it is unlikely to be a panacea. It puts Twitter in the position of having to make more explicit editorial decisions, which, as discussed above, is an exercise that is difficult to do in a way that will satisfy everyone.
It also may not be sustainable: how many tweets will need this treatment? And how many public officials will similarly require it? Still, it certainly seems like a reasonable tack for Twitter to try – one that tries to mitigate the costs of Trump's unfettered tweeting without inflicting the costs that would result from their suppression.

Which leads to why Section 230 is so important, and why it is a bad idea to call for changing it in response to Trump. Because Section 230 is what gives Twitter the freedom to try to figure out the best way to handle the situation. There are no easy answers, just best guesses, but were it not for Section 230, Twitter would not be able to give it the best shot it can to get it right. Instead it would be pressured to take certain actions, regardless of whether those actions were remotely in the public interest. Without Section 230, platforms like Twitter will only be able to make decisions in their own interest, and that won't help them try to meet the public call to do more.

Changing Section 230 also won't solve anything, because the problem isn't with Twitter at all. The problem is that the President of the United States is of such poisoned character that he uses his time in office to spread corrosive garbage. The problem is that the President of the United States is using his power to menace citizens. The problem is that the President of the United States is using his role as the chief executive of the country to dissolve confidence in our laws and democratic norms.

The problem is that the President of the United States is doing all these things, and would be doing all these things, regardless of whether he was on Twitter. But what would change if there were no Twitter is our ability to know that this is what he is doing. It is no idle slogan to say that democracy dies in the darkness; it is an essential truth. And it's why we need to hold fast to our laws that enable the transparency we need to be able to know when our leaders are up to no good if we are to have any hope of keeping them in check.

Because that's the problem we're having right now. Not that Twitter isn't keeping Trump in check, but that nothing else is. That's the problem that we need to fix. And killing Twitter, or the laws that enable it to exist, will not help us get there. It will only make it much, much harder to bring about that needed change.
A recent episode of NPR's Fresh Air featured an amazing interview with Dr. David Fajgenbaum, who was diagnosed years ago with the rare Castleman's Disease, about which very little was known (and the general prognosis was grim). Fajgenbaum talks about how he ended up in hospitals believing that he was about to die five separate times (he even had his last rites read to him), but then set up his own organization to try to crowdsource a cure. He details the full story in his book, Chasing My Cure, which was published last fall.

The good news is that, through that crowdsourcing effort, called the Castleman Disease Collaborative Network (CDCN), they at least found a treatment that (for now...) appears to work for Fajgenbaum himself:
As Facebook's lawsuit against Israeli malware purveyor, NSO Group, continues, more facts are coming to light that undercut the spyware vendor's claims that it's just a simple software developer that can't be blamed for the malicious acts of its customers.

NSO Group argued in court that the sovereign immunity that insulates the governments it sells to (including such abusive regimes as the United Arab Emirates and Saudi Arabia) similarly shields it from Facebook's desire to prevent it from using WhatsApp to deploy malware. Facebook has since pointed out NSO uses US servers that it owns or rents to deploy the malware it claims it has no involvement in deploying.

More information has come to light, thanks to a whistleblower of sorts who spoke to Joseph Cox of Motherboard. The statements made by a former NSO employee further implicate the company in the dirty doings of its customers (who have targeted journalists, activists, and lawyers).
Every time I ask anyone associated with Facebook's new Oversight Board whether the nominally independent, separately endowed tribunal is going to address misuse of private information, I get the same answer—that's not the Board's job. This means that the Oversight Board, in addition to having such an on-the-nose proper name, falls short in a more important way—its architects imagined that content issues can be tackled substantively without addressing privacy issues. Yet surely the scandals that have plagued Facebook and some other tech companies in recent years have shown us that private-information issues and harmful-content problems have become intimately connected.

We can't turn a blind eye to this connection anymore. We need the companies, and the governments of the world, and the communities of users, and the technologists, and the advocates, to unite behind a framework that emphasizes the deeper-than-ever connection between privacy problems and free-speech problems.

What we need most now, as we grapple more fiercely with the public-policy questions arising from digital tools and internet platforms, is a unified field theory—or, more properly, a "Grand Unified Theory" (a.k.a. "GUT")—of free expression and privacy.

But the road to that theory is going to be hard. From the beginning three decades ago, when digital civil liberties emerged as a distinct set of issues that needed public-policy attention, the relationship between freedom of expression and personal privacy in the digital world has been a bit strained. Even the name of the first big conference to bring together all the policy people, technologists, government officials, hackers, and computer cops reflected the tension. The first Computers, Freedom and Privacy conference, held in Burlingame, California, in 1991, made sure that attendees knew that "Privacy" was not just a kind of "Freedom" but its own thing that deserved its own special attention.

The tensions emerged early on. It seemed self-evident to most of us back then that the relationship between freedom of expression (and freedom of assembly and freedom of inquiry) had to have some limits—including limits on what any of us could do with the private information about other people. But while it's conceptually easy to define in fairly clear terms what counts as "freedom of expression," the consensus about what counts as a privacy interest is murkier. Because I started out as a free-speech guy, I liked the law-school-endorsed framework of "privacy torts," which carved out some fairly narrow privacy exceptions to the broad guarantees of expressive freedom. That "privacy torts" setup meant that, at least when we talked about "invasion of privacy," I could say what counted as such an invasion and what didn't. Privacy in the American system was narrow and easy to grasp.

But this wasn't the universal view in the 1990s, and it's certainly not the universal view in 2020. In the developed world, including the developed democracies of the European Union, the balance between privacy and free expression has been struck in a different way. The presumptions in the EU favor greater protection of personal information (and related interests like reputation) and somewhat less protection of freedom of expression.
Sure, the international human-rights source texts like the Universal Declaration of Human Rights (in Article 19) may protect "freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media regardless of frontiers." But ranked above those informational rights (in both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights) is the protection of private information, correspondence, "honor," and reputation. This difference in balance is reflected in European rules like the General Data Protection Regulation.

The emerging international balance, driven by the GDPR, has created new tensions between freedom of expression and what we loosely call "privacy." (I use quotation marks because the GDPR regulates not just the use of private information but also the use of "personal" information that may not be private—like old newspaper reports of government actions to recover social-security debts. This was the issue in the leading "right to be forgotten" case prior to the GDPR.) Standing by itself, the emerging international consensus doesn't provide clear rules for resolving those tensions.

Don't get me wrong: I think the idea of using international human rights instruments as guidance for content approaches on social-media platforms has its virtues. The advantage is that in international forums and tribunals it gives the companies as strong a defense as one might wish in the international environment for allowing some (presumptively protected) speech to stay up in the face of criticism and removing some (arguably illegal) speech. The disadvantages are harder to grapple with. Countries will differ on what kind of speech is protected, but the internet does not quite honor borders the way some governments would like. (Thailand's lèse-majesté is a good example.) In addition, some social-media platforms may want to create environments that are more civil, or child-friendly, or whatever, which will entail more content-moderation choices and policies than human-rights frameworks would normally allow. Do we want to say that Facebook or Google *can't* do this? That Twitter should simply be forbidden to tag a presidential tweet as "unsubstantiated"? Some governments and other stakeholders would disapprove.

If a human-rights framework doesn't resolve the free-speech/privacy tensions, what could? Ultimately, I believe that the best remedial frameworks will involve multistakeholderism, but I think they also need to begin with a shared (consensus) ethical framework. I present the argument in condensed form here: "It's Time to Reframe Our Relationship With Facebook." (I also published a book last year that presents this argument in greater depth.)

Can a code of ethics be a GUT of free speech and privacy? I don't think it can, but I do think it can be the seed of one. But it has to be bigger than a single company's initiative—which, more or less, is the best we can reasonably hope Facebook's Oversight Board (assuming it sets out ethical principles as a product of its work on content cases) will ever be. I try not to be cynical about Facebook, which has plenty of people working on these issues who genuinely mean well, and who are willing to forgo short-term profits to put better rules in place. While it's true at some sufficiently high level that the companies privilege profits over public interest, the fact is that once a company is market-dominant (as Facebook is), it may well trade off short-term profits as part of a grand bargain with governments and regulators.
Facebook is rich enough to absorb the costs of compliance with whatever regimes the democratic governments come up with. (A more cynical read of Zuckerberg's public writings in the aftermath of the company's various scandals is that he wants the governments to get the rules in place, and then FB will comply, as it can afford to do better than most other companies, and then FB's compliance will be a defense against subsequent criticism.)

But the main reason I think reform has to come in part at the industry level rather than at the company level is that company-level reforms, even if well-intended, tend to instantiate a public-policy version of Wittgenstein's "private language" problem. Put simply, if the ethical rules are internal to a company, the company can always change them. If they're external to a company, then there's a shared ethical framework we can use to criticize a company that transgresses the standards.

But we can't stop at the industry level either—we need governments and users and other stakeholders to be able to step in and say to the tech industries that, hey, your industry-wide standards are still insufficient. You know that industry standards are more likely to be adequate and comprehensive when they're buttressed both by public approval and by law. That's what happened with medical ethics and legal ethics—the frameworks were crafted by the professions but then recognized as codes that deserve to be integrated into our legal system. There's an international consensus that doctors have duties to patients ("First, do no harm") and that lawyers and other professions have "fiduciary duties" to their clients. I outline how fiduciary approaches might address Big Tech's consumer-trust problems in a series of Techdirt articles that begins here.

The "fiduciary" code-of-ethics approach to free-speech and privacy problems for Big Tech is the only way I see of harmonizing digital privacy and free-speech interests in a way that will leave most stakeholders satisfied (as most stakeholders are now satisfied with medical-ethics frameworks and with lawyers' obligations to protect and serve their clients). Because lawyers and doctors are generally obligated to tell their clients the truth (or, if for some reason they can't, end the relationship and refer the clients to other practitioners), and because they're also obligated to "do no harm" (e.g., by not using personal information in a manipulative way or violating clients' privacy or autonomy), these professions already have a Grand Unified Theory that protects both speech and privacy in the context of clients' relationships with practitioners.

Big Tech has a better shot at resolving the contradictory demands on its speech and privacy practices if it aspires to do the same, and if it embraces an industry-wide code of ethics that is acceptable to users (who deserve client protections even if they're not paying for the services in question). Ultimately, if the ethics code is backed by legislators and written into the law, you have something much closer to a Grand Unified Theory that harmonizes privacy, autonomy, and freedom of expression.

I'm a big booster of this GUT, and I've been making versions of this argument before now. (Please don't call it the "Godwin-Unified Theory"—having one "law" named after me is enough.)
But here in 2020 we need to do more than argue about this approach—we need to convene and begin to hammer out a consensus about a systematic, harmonized approach that protects human needs for freedom of expression, for privacy, and for autonomy that's reasonably free of psychological-warfare tactics of informational manipulation. The issue is not just false content, and it's not just personal information—open societies have to incorporate a fairly high degree of tolerance for unintentionally false expression and for non-malicious or non-manipulative disclosure or use of personal information. But an open society also needs to support an ecosystem—a public sphere of discourse—in which neither the manipulative crafting of deceptive and destructive content nor the manipulative targeting of it based on our personal data is the norm. That's an ecosystem that will require commitment from all stakeholders to build—a GUT based not on gut instincts but on critical rationalism, colloquy, and consensus.
Nothing has made the FBI more irritated than its ability to break into phones it swears (often in court!) it cannot possibly get into without the device maker's assistance. The agency doesn't want third-party vendors to offer solutions, and it doesn't seem to want its own technical staff to find ways to get stuff from encrypted devices. It wants the government to tell companies like Apple to do what they're told. It will accept any solution that involves a mandate, whether it comes from a federal court or our nation's legislators. It will accept nothing else.

The FBI and DOJ's foul mood over their phone-cracking success and courtroom failures came to a head recently. A joint press conference announcing not-so-breaking news about the contents of the Pensacola air base shooter's phones contained a whole lot of off-target griping about a company whose only crime was selling consumer products. Here's Riana Pfefferkorn for TechCrunch:
For decades the internet has flourished on the back of innovation, creativity, adaptation, and hard work. But while this technological revolution spurred no limit of incredible inventions, services, and profit, a drumbeat of scandals has highlighted how privacy and security were often a distant afterthought — if they were thought about at all.

Years later, the real cost of this apathy has become clear. We now face a daily parade of deeply entrenched privacy headaches impacting a web of interconnected industries and institutions — for which there are no quick fixes or easy answers.

Enter the Tech Policy Greenhouse: a new policy forum we're hopeful will bring more nuance, collaboration, and understanding to a privacy conversation frequently dominated by simplistic partisan bickering, bad faith arguments, and the kind of ideological ruts that can result in bad solutions, no solutions, or missing the forest for the trees entirely.

When it comes to privacy and security, the penalty for our collective failure couldn't be more obvious.

The global internet of things sector routinely fails to adhere to even the most basic security and privacy standards, resulting in hackable internet-connected Barbies, refrigerators, and tea kettles. Experts note these devices collectively create a form of "invisible pollution" that is easily ignored, but that routinely puts consumers, businesses, and the health of the internet at risk.

Corporations and governments alike repeatedly leave sensitive data unencrypted and openly exposed in the cloud, often failing to implement basic security measures despite ample warning. Avoidable hacks, breaches, and leaks are now a weekly affair, as are "historic" but performative government penalties that neither compensate victims nor seriously deter further malpractice.

The monetization of every last shred of location and behavior data has become a multi-billion dollar industry where safeguards and meaningful oversight are often lacking. As a result, sensitive behavioral data is routinely abused by everyone from law enforcement to those pretending to be law enforcement, with the first casualties often the most vulnerable among us.

All of these problems require intelligent, multi-stakeholder collaboration built on the understanding that every solution has immense ramifications, there is no shortage of bad actors eager to derail effective consensus, and each and every action routinely results in unforeseen consequences.

The country's privacy issues are also inextricably linked to other problems that the United States has failed to address, from the rampant monopolization and consolidation caused by mindless merger mania, to the slow but steady erosion of meaningful antitrust oversight. The rise of one of the biggest global health threats in a century has only complicated the debate further, shining an even brighter spotlight on existing problems, while creating entirely new challenges in balancing public health and public privacy in the mass surveillance era.

As we stumble collectively in the right direction, the Tech Policy Greenhouse hopes to reboot a conversation in dire need of a constructive fresh start. Over the next few weeks, you'll be hearing from a diverse chorus of activists, scholars, executives, and experts who will be tackling what they deem the most essential issues of the day.
Kicking things off tomorrow will be Oregon Senator Ron Wyden, historically and repeatedly one of the leading DC voices for meaningful privacy reform.Intelligent privacy policies and solutions won’t be easy to come by, and perfect proposals are likely impossible. But we’re eager to create a platform that can help drive policy makers toward better decision making, and we’re hopeful you’ll be part of the conversation.
Content moderation at scale is impossible to do well. But content moderation of a world leader spewing blatant conspiracy theories may be just as difficult, and that's not even at scale.

We're only partway through this week, and Donald Trump has already created a textbook's worth of content moderation questions to explore. It started with Trump going nuts with a bunch of tweets about a blatantly disproved conspiracy theory regarding a young staffer of TV host Joe Scarborough from back when he was in Congress. That staffer, Lori Klausutis, died from an undiagnosed heart condition years ago. The police and coroner found no evidence of foul play. And suddenly Trump, who used to appear on Scarborough's show back in the day, decided to spew a bunch of utter nonsense hinting strongly at the blatantly false idea that Scarborough had something to do with Klausutis' death.

This is straight out of the Trump playbook. It is blatant fake news (the accusation he likes to make about anyone who reports accurately on his activities). It is insane conspiracy mongering. It is hurtful. It is hateful. It is potentially dangerous. And it serves Trump in two distinct ways: as a distraction from his ongoing cataclysmic handling of the COVID-19 pandemic, and as part of his never-ending intimidation campaign against anyone in the media who dares to point out that the emperor has no clothes. As the Atlantic noted, this is malignant cruelty. It is disgusting.

Many people have been arguing that Twitter should shut down Trump's account or, at the very least, delete the tweets in question. Indeed, Klausutis' husband sent a deeply moving letter to Jack Dorsey begging him to remove the President's tweets:
ChronoWatch has all the features you need. It has 16 main functions including activity tracking, sleep monitoring, message and call notifications, alarms, and more. It's waterproof, so you can go all out with your workout routines. It comes with a 1.4" colorful display and full capacitive touch supporting taps and swipes. Simply download the Da Fit app on your phone and pair it via Bluetooth. With 3 hours of charge, this watch lasts up to 7 days of use. It's on sale for $37.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Today we're introducing something very new: the Tech Policy Greenhouse. This is a project that I've been working on for about two years now, and I'm both thrilled and relieved to finally be getting it out the door. It starts from this basic premise: many of the biggest issues facing technology and innovation today are significant challenges that have no easy answer. Every possible approach or solution (including doing nothing at all) has tradeoffs. And yet very few people seem willing to admit that, as admitting to tradeoffs in policy proposals is seen as a sign of weakness or giving in. But the issues facing innovation policy today are too big and too important to not have a truly open discussion.

And having a truly open discussion about difficult policy questions means a lot more than the way the media has traditionally held these conversations: pitting two sides against one another and letting them argue it out. That rarely brings enlightenment, and mostly seems to just involve everyone digging into their previously held beliefs. Having an open discussion about big challenges with no easy answers means being willing to dive deep into details, exploring ideas that might make you uncomfortable, and testing hypotheses that sometimes seem absurd on first glance -- but then being open to the feedback, ideas, improvements, and critiques raised about those ideas.

The Tech Policy Greenhouse is an attempt to have those discussions. Think of it as something of an online symposium, where we will be bringing in a variety of experts to give their thoughts on these issues, but hopefully with the humility to recognize that what is being discussed is difficult, and that understanding all of the variables at play is an impossibility. Part of this means that we'll be publishing stories that challenge us -- including some arguments that I personally disagree with -- but which we believe are being presented in good faith and for the purpose of open discussion and debate, in the hopes that whatever future policy proposals and decisions are made, they are better informed by understanding a variety of points of view, a variety of proposals, and a variety of ideas about what might work.

This does not mean that the Tech Policy Greenhouse will or should be a clearinghouse for nonsense or half-baked ideas. There are certainly plenty of those. Instead, the goal is to get the best minds out there willing to discuss difficult-to-impossible problems in a way that allows for greater understanding and greater humility about the eventual policy choices that are made.

To help with this project, we are pleased to have help from two excellent editors, whose names should be well recognized around here: Karl Bode and Mike Godwin. Karl, of course, has long been a writer for Techdirt, as well as a number of other tech, telco, and policy publications -- and has agreed to take on a more involved editorial role for Greenhouse. Godwin, of course, is so internet-famous that he has an entire "law" named after him. He was also the first lawyer EFF hired, as well as the General Counsel for the Wikimedia Foundation. His insights into all things related to tech policy are unmatched and always thought-provoking.

For readers of Techdirt, you will see the new Greenhouse posts directly in the main feed, though they will be visually distinct (you may notice they look a bit... greener).
We will continue to post regular Techdirt posts and content in the regular format, but the green posts will be from various experts and will be based around a theme that we are exploring at the time. Our plan is to roll out a few themes each year (the exact pace we'll figure out along the way). There is also now a Greenhouse tab at the top, if you want to see only the Greenhouse posts.

There is one other change regarding the Greenhouse posts. While they will have our regular comment area, there will also be a separate "Featured Discussion" area, in which those who are participating in the Techdirt Greenhouse project will be encouraged to comment and discuss the other posts in the series. This is very much an experiment that might not work, but we're excited to test it out. If the panelist discussion is happening, you will see it between the post and the regular comment section.

Our inaugural topic is digital privacy, because we decided to jump right into the deep end of extremely important, but controversial, problems with no easy solutions. Karl will introduce the overall topic in another introductory post, followed by Godwin's introduction regarding his thoughts on why the privacy debate needs to be reframed. And then, starting tomorrow and over the next few weeks, you'll see a variety of Greenhouse posts from experts interspersed among the regular Techdirt content. We are also open to more such posts, so if you have expertise and would like to contribute, please feel free to contact us.

Also, I should address the elephant in the greenhouse: this project is currently sponsored by Google, Twitter, and Protocol Labs. For some, this will discredit the entire project. We set out to try to launch this project with only grants from foundations and without corporate sponsorship, but so far have not been able to find foundations willing to support it (if you know of any who might be interested, or if you happen to work for one, please also reach out and let us know). Given that unfortunate lack of interest from foundations so far, we were happy that these three companies were willing to step up and sponsor the launch of this effort which, again, is a few years in the making. From the beginning, we were upfront that the whole point of this project is to discuss challenging tech policy questions, and that if any company sponsored this project, they would probably disagree heavily with some of the content, but that we felt that enabling those open and thoughtful discussions was good for the future of innovation itself -- and all three sponsors seemed to recognize the value of the conversations, even when some of the content might go against the company's own interests (indeed, the interests of the three sponsors are not aligned with one another in many cases, and sometimes diametrically opposed).

Still, if this concerns you, I only ask that you judge the content on its own merits. The whole point of this project is to take us all out of our comfort zone. I hope that people everywhere, no matter how they feel about various tech policy questions, can at least recognize that thoughtful conversation and debate are important to coming up with better policy overall. I look forward to this inaugural discussion on privacy -- and I hope everyone here will welcome it.
Big wireless carriers haven't been exactly honest when it comes to the looming fifth-generation wireless standard (5G). Eager to use the improvements to charge higher rates and sell new gear, carriers and network vendors are dramatically over-hyping where the service is actually available, and what it can actually do. Some, like AT&T, have gone so far as to actively mislead customers by pretending that their existing 4G networks are actually 5G. AT&T took this to the next level last year by issuing phone updates that changed the 4G icon to "5GE" on customer phones, despite the fact that actual 5G isn't really available.

Sprint sued AT&T last year for being misleading, but the suit was settled (likely so Sprint could focus on its merger with T-Mobile) without much coming of it. AT&T's competitors also complained via the Better Business Bureau's National Advertising Division (NAD), which is a "self-regulatory" system designed to help companies settle disputes without the involvement of regulators. After a year of bickering and appeals, NARB (the enforcement arm of NAD) finally ruled last week that the practice was misleading and the ads should be discontinued:
We were promised no more deaths by May 15th, but that hasn't happened. With no one 100% sure what the best options are going forward, this is how states are handling the task of (lol) cautiously "reopening." In a long press conference, the Trump administration said states could reopen if they hit a number of checkpoints, including a certain amount of testing and a plateau/drop in positive cases.

A number of states appear to have stopped listening after the word "reopen." Whether or not they've hit the CDC's checkpoints does not appear to matter. A collective shrug about deaths and infections was issued by a number of governors, some of whom are (justifiably) tired of gun-toting residents showing up at the state house to protest their lack of access to haircuts and house parties.

When the data doesn't match the narrative, there's only one thing to do: fuck up the data. And the person who's compiling it. Florida has lots of sunny beaches that are currently too empty to satisfy sun junkies who wish to take advantage of the lengthy shorelines contained in America's Penis. COVID stats weren't exactly lending themselves to the "it's fine" narrative the governor wanted to push. So, the state government did some pushing of its own.
As the debate continues over the renewal of some Patriot Act provisions for NSA surveillance techniques, the House now has a chance to correct a failure by the Senate, by one measly vote, to require a warrant for the FBI to go sifting through your internet histories that the NSA scooped up along the way. The intelligence community refuses to reveal how often this is done, but Senator Wyden is indicating that it's a lot more than you think -- and he's been right pretty much every time he's made those suggestions.

It's now up to the House, and while Rep. Lofgren had a version of the warrant requirement amendment, some petty political squabbling from Democratic leadership threatened to quash it -- mainly by Rep. Adam Schiff inserting a massive loophole to allow for more warrantless surveillance. Earlier on Tuesday it was reported that, after a long weekend of haggling, it appeared that a vote would be allowed on Lofgren's amendment and that the language had been cleared up to the point that even Senator Wyden backed it:
It appears that at least one judge handling Devin Nunes' various SLAPP suits in Virginia has caught on to at least some of what's going on here. Judge Robert E. Payne has now transferred two of the lawsuits -- the ridiculous defamation filing against CNN and the even sillier SLAPP suit against the Washington Post -- to better venues. In both cases, the judge seems pretty fed up with Nunes' lawyer, Steven Biss, and opens both rulings by quoting what was said to Biss in yet another one of his silly SLAPP suits:
The Senate tried and failed to erect a warrant requirement for the FBI's collection of US citizens' internet browsing data. The amendment to the FISA reauthorization fell one vote short -- something that could have been avoided by having any of the four missing Senate supporters show up and actually support the thing. The House has a chance to pass this amendment before sending the bill to the president, but they've decided to engage in some unproductive infighting instead.

As it stands now, it still stands the way it has always stood: the FBI can get this information without a warrant. If we can't have this amendment, maybe we can have some answers about the FBI's use of this power. Senator Ron Wyden has sent a letter to the Director of National Intelligence asking how often government agencies have spied on Americans' internet usage. The answer will probably arrive sometime between "years from now" and "never," given how enthused the DNI usually is about discussing domestic surveillance originating from the Foreign Intelligence Surveillance Act.

Since there doesn't seem to be any good reason to allow the FBI to continue this warrantless collection, surveillance supporters in Washington have decided to craft some bad ones. Dell Cameron reports for Gizmodo that certain Senators think a warrant requirement allows the terrorists to win.
Last week, we wrote about how one of the biggest, most glaring flaws in the Copyright Office's long-awaited report on DMCA 512's safe harbors was its refusal to recognize how frequently the law is abused to take down legitimate works. As if on cue, over the weekend the NY Times ran quite the story about a feud in (I kid you not) wolf-kink erotica fan fiction that demonstrates how the DMCA is regularly abused to punish and silence people for reasons that have nothing to do with copyright.

The full NY Times article is worth reading, describing a still ongoing legal fight between two fanfic authors who wrote stories building on some apparently common tropes in the wolf-erotica fiction genre. One author sued the other, but, as the article notes, all of the supposedly "copied" elements are common throughout the wider genre:
Over the weekend, the Wall Street Journal reported that "President Trump is considering establishing a panel to review complaints of anti-conservative bias on social media." That story is likely behind a paywall, though Fox News (natch) reposted most of it and lots of tech news sites wrote up their own versions of the report.

The basis is exactly what you think it is. A bunch of Trump supporters have been falsely insisting that social media companies are unfairly "biased" against conservatives. There is exactly zero evidence to date to support this. There are a few anecdotes of whiny assholes, who violated terms of service, losing service, and a few anecdotes of just not very good content moderation (though those seem to fall pretty broadly across the political spectrum). There is no indication that any of the moderation activity is unfairly targeting conservatives or even that there is any "bias" at all. I'm sure some people will rush to the comments here with one of two reactions: they will either call me "blind" and complain that I'm simply not looking around (though they will present no actual evidence), or they will cite a few meaningless anecdotes, ignoring that a few anecdotes on platforms that have to make literally millions of moderation choices are not evidence of bias.

But, more importantly: the government can't do anything even if the platforms were biased. And this is where all of the reporting I've seen so far falls down. Most clearly, the government simply cannot force platforms to moderate in a certain way. That would violate the 1st Amendment. So even if a panel is formed, it couldn't actually do anything to change things, beyond just being an annoying pest. But it seems like the media should be making this clear. Any panel cannot force internet companies to treat political viewpoints in some different manner. That's a blatant 1st Amendment problem.

Separately, even the formation of the panel may very well present a 1st Amendment problem on its own, because it is clearly the government using its will to try to pressure private companies into treating certain political viewpoints differently. Remember what Judge Posner wrote in Dart v. Backpage, in which he dinged a sheriff, Thomas Dart, for merely sending a letter that was vaguely threatening to the free speech rights of an internet platform: "Some public officials doubtless disapprove of bars, or pets and therefore pet supplies, or yard sales, or lawyers,... or men dating men or women dating women—but... it would be a clear abuse of power for public officials to try to eliminate them not by expressing an opinion but by threatening... third parties, with legal or other coercive governmental action."

Just because government officials are upset with the 1st Amendment protected speech choices of the companies, that does not mean they can do something that is obviously a threat of coercive action. Anyone -- including the Wall Street Journal -- reporting on this stuff owes it to their readers to make that clear. Tragically, so far none of the reports I've seen have done so.
Marketing isn't simple. The 2020 Complete Facebook Marketing Masterclass is here to give you step-by-step, simple strategies to gain targeted followers. The class will start with the very basics of creating professional Facebook profiles and then progress to proven tips and tricks for marketing. You'll learn how to use targeted ads, how to harness Facebook groups to create a community, and more. It's on sale for $14.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The best place for a messenger is six feet under, according to the governor of Arkansas, Asa Hutchinson. Despite being a founding chair of Governors for CS [Computer Science] (according to Slashdot), Hutchinson has decided to blame a security researcher for the state's inability to properly secure one of its websites. Lindsey Millar, who reported the breach exposing the sensitive information of the site's users, reports that Governor Hutchinson is trying to villainize the person who stumbled upon the unexpected data flow.

It all started innocently enough when a programmer, who had attempted to apply for financial aid via Arkansas' Pandemic Unemployment Assistance website, discovered it was exposing Social Security numbers and bank account numbers. This person got in touch with Millar, who brought it to the attention of the state.

That's where things went extremely wrong.
Late last week, news emerged that the DOJ would likely be bringing a massive antitrust lawsuit against Google. Reports suggest this is the culmination of a full year of saber rattling by Bill Barr, who has made "antitrust inquiries" into "big tech" a top priority at the DOJ:
This week, our first place winner on the insightful side comes from That One Guy in response to FOSTA supporters making an unsurprising pivot to killing off pornography:
Food delivery services always felt a bit wonky to me. I'm usually not terribly old fashioned about most things, but I generally understood that some restaurants delivered and some did not and that that was mostly fine. Along came food delivery services to bring us food from places that didn't deliver and that was mostly fine, too. But lately it's starting to become clear that somewhere in the ecosystem of venture capitalist funding and food delivery services, something is broken. We'll explore the larger issues in a separate post, but one great example of how janky this is getting is how one pizzeria owner managed to make a nice profit by buying his own pizzas from DoorDash. Confused? Well, buckle up.
We've noted several times that the FTC's settlement over the Equifax hack, which exposed the personal data of 147 million Americans, was little more than a performative joke. While much was made of the historic fine levied against the company, the FTC's settlement failed to provide impacted victims much of anything outside of a sad chuckle.

The agency originally promised that impacted users would be able to nab 10 years of free credit reporting or a $125 cash payout if they already subscribed to a credit reporting service. But it didn't take long for the government to backtrack, claiming it was surprised by the number of victims interested in modest compensation, while admitting the settlement failed to set aside enough money to pay even 248,000 of the hack's 147 million victims. Even the credit reporting was relatively useless given such offers have been doled out the last seventy times consumers were impacted by a company's shaky security and privacy standards.

While consumers didn't see their promised compensation, US banks are facing no such hurdles. The company this week agreed to shell out $5.5 million to thousands of banks and credit unions who say they were harmed by the targeted hack of Equifax customers. The full agreement with the banks also doles out an additional $25 million to help beef up security, with Equifax also covering the banks' administrative costs, attorney fees, and assorted expenses.

But while the banks are now covered, the actual victims of the hack remain lost in the bureaucratic mire:
Also, following on my last post: since the First Amendment protects site moderation and curation decisions, why all the calls to get rid of CDA 230's content moderation immunity?

Having listened carefully and at length to the GOP Senators and law professors pitching this, the position seems to be a mix of bad faith soapboxing ("look at us take on these tech libs!") and the idea that sites could be better held to account -- contractually, via their moderation codes -- if the immunity wasn't there.

This is because the First Amendment doesn't necessarily bar claims that various forms of "deplatforming" -- like taking down a piece of content, or suspending a user account -- violate a site's Terms of Use, Acceptable Use Policy, or the like. That's the power of CDA 230(c)(2): it lets sites be flexible, experiment, and treat their moderation policies more as guidelines than rules.

Putting aside the modesty of this argument (rallying cry: "let's juice breach-of-contract lawsuits against tech companies") and the irony of "conservatives" arguing for fuller employment of trial attorneys, I'll make two observations:

First, giving people a slightly easier way to sue over a given content moderation decision isn't going to lead to sites implementing a "First Amendment standard." Doing so -- which would entail allowing posts containing all manner of lies, propaganda, hate speech, and terrorist content -- would make any site choosing this route an utter cesspool.

Second, what sites WOULD do in response to losing immunity for content moderation decisions is adopt much more rigid content moderation policies. These policies would have less play in them, less room for exceptions, for change, for context.

Don't like our content moderation decision? Too bad; it complies with our policy.

You want an exception? Sorry; we don't make exceptions to the policy.

Why not? Because some asshole will sue us for doing that, that's why not.

Have a nice day.

CDA 230's content moderation immunity was intended to give online forums the freedom to curate content without worrying about this kind of claim. In this way, it operates somewhat like an anti-SLAPP law, by providing the means for quickly disposing of meritless claims. Though unlike a strong anti-SLAPP law, CDA 230(c)(2) doesn't require that those bringing such claims pay the defendant's attorney fees.

Hey, now THERE's an idea for an amendment to CDA 230 I could get behind!

Reposted from the Socially Awkward blog.
More than four years ago, the Copyright Office kicked off a project to do a big "study" on Section 512 of the DMCA, better known as either the "notice-and-takedown" section of copyright law or the "safe harbors" section for websites. The Office took comments, held a few somewhat bizarre "roundtables" (that we participated in)... and then... silence. Years of silence. Until yesterday, when it finally released the report. It's 250 pages and there's a lot in there -- and we're likely to have a few more posts on it as we dig into the details -- but to kick it off, I wanted to highlight just how bizarre a report it is, in that the authors don't seem to realize or ever acknowledge that the purpose of copyright law (and even this section) is to create the best possible services for the public.

Instead, the report seems to frame the entire Section 512 debate as a battle between the legacy copyright industry and giant internet companies. From the executive summary:
It's getting absurd to have to do this every few weeks, but the media keeps publishing blatantly wrong things about Section 230 of the Communications Decency Act. You would think that after the NY Times had to roll back its own ridiculous headline blaming "hate speech" on the internet on Section 230, only to have to say "oops, actually, it's the 1st Amendment," other publications would take the time to get things straight and recognize that nearly everything they're complaining about is actually the 1st Amendment, not Section 230. Section 230 merely protects the 1st Amendment, by making it easier to get out of SLAPPish lawsuits earlier in the process.

Yet Newsweek apparently did not take note, and agreed to publish an op-ed by the "Internet Accountability Project" (which is not accountable for its own funding), a group set up by former Republican Congressional staffers to deliberately push FUD and nonsense about successful internet companies. IAP has been targeting Section 230 pretty much from day one, and this Newsweek op-ed is par for the course in that nearly everything it claims is wrong, misleading, or just ridiculous. First it describes a few examples of both Facebook and Google moderating potentially dangerous misinformation campaigns about COVID-19 and claims that this is some sort of evil censorship:
The UV Sterilizer lets you clean your phone while charging it. Using UV-C light, it kills up to 99.99% of germs and bacteria without any harmful heat, liquid, or chemicals. It also has Qi inductive charging technology. It fits phones up to 6.2" and also works with watches, glasses, keys, earphones, and more. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
At the same time the FBI director was claiming the private sector (other than Apple) couldn't help agents break into encrypted iPhones, the private sector was once again demonstrating it could do exactly that. Chris Wray's remarks to the press centered less on the underwhelming news that the FBI had conclusively linked the Pensacola Air Base shooter to al Qaeda than on Apple's supposed unhelpfulness.The FBI claimed it had found a way to access data on the shooter's phones, but provided no details on its method. Maybe agents brute forced a passcode. Maybe they just found a side door that allowed them to exfiltrate the data they were looking for. Whatever it was, it wasn't something provided by a vendor. In fact, Chris Wray went so far as to claim the media was misleading the public about the availability of encryption-breaking/bypassing tech.
For years a growing number of US towns and cities have been forced into the broadband business thanks to US telecom market failure. Frustrated by high prices, lack of competition, spotty coverage, and terrible customer service, some 750 US towns and cities have explored some kind of community broadband option. And while the telecom industry routinely likes to insist these efforts always end in disaster, that's never actually been true. While there certainly are bad business plans and bad leaders, studies routinely show that such services not only see the kind of customer satisfaction scores that are alien to large private ISPs, they frequently offer better service at lower, more transparent pricing than many private providers.

Undaunted, big ISPs like AT&T and Comcast have waged a multi-pronged, several-decade attack on such efforts. One, by passing protectionist laws in roughly 20 states either hamstringing or banning cities from building their own networks, often in cases where private ISPs refuse to expand service. Two, by funding economists, consultants, and think tankers (usually via proxy organizations) happy to try and claim that community broadband is always a taxpayer boondoggle -- unnecessary because private sector US broadband is just that wonderful.

The latest example of the latter comes via the Taxpayer Protection Alliance, a nonprofit that insists its focus is "holding government accountable," but is routinely backed by telecom giants like AT&T, which, for obvious reasons, are eager to paint an inaccurate picture of what's actually happening. The group's latest study, "GON with the Wind: The Failed Promise of Government Owned Networks Across the Country," claims to take a look at 30 examples of community broadband networks, with the heavy implication that the majority of them have failed -- proving that community broadband is always bad and private sector broadband is always good:
Just last week, Ben Thompson's excellent Stratechery site had a great post describing the important differences between open and free, specifically with regards to podcasts. The occasion was his decision to launch a paid-for, but still "open" podcast. And he explains how there are important differences (in particular) between "open and for-pay" vs. "closed and free." Open and for-pay means that it's not locked down, and can work on a variety of different setups and open platforms. The payment is part of the business model, but the openness gives the end-users more control and freedom. In the software world, you might talk about this as "free as in speech" rather than "free as in beer." The "free, but closed" model is one where you can get the products for free -- but they're locked in a proprietary system. Facebook is an example of free, but closed.

Thompson was talking in particular about his own podcast (open, but paid) as compared to Spotify's podcast strategy (free, but closed). Last year, when Spotify purchased a bunch of podcast companies, we worried that it foretold the end of the open world of podcasting. You can get a Spotify account for free, but unlike most podcast apps, you can't get any podcast you want via Spotify. Spotify has to agree to host it, and as a podcaster you have to "apply" (indeed, Techdirt's own podcast was initially rejected by Spotify, though has since been let in). That's a "closed, but free" setup. Most podcasts are both open and free -- published as open MP3 files, using an open RSS feed that any regular podcast app can grab.

Spotify, so far, hadn't done much to close off the podcasts that it had purchased, but perhaps that's changing. Earlier this week it was announced that one of the most popular podcasts in the world (if not the most popular), Joe Rogan's, would now be moving exclusively to Spotify. News reports have said that Spotify paid over $100 million to get Rogan's podcast on board, while some have put the number closer to $200 million.

While it's totally understandable why Rogan would take that deal (who wouldn't?), it does remain a sad day for the concept of an open internet. When we lock up content into silos, we all lose out. The entire concept of podcasts came from the open nature of the internet -- combining MP3s and RSS to make it all work seamlessly and enabling anyone to just start broadcasting. The entire ecosystem came out of that, and putting it into silos and locking it up so that only one platform can control it is unfortunate. I'm sure it will get many people to move to Spotify's podcasting platform, though, and that means those that do offer open podcasting apps (most others) will suffer, because most people aren't going to want to use two different podcast apps.

Even if the initial economics make sense, it still should be seen as a sad day for the open internet that enabled podcasting to exist in the first place.
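Since "open" here really just means plain MP3 files listed in a plain RSS feed, it's worth seeing how little machinery a podcast app actually needs. Below is a minimal, hypothetical sketch (the feed URL is a placeholder, not a real feed) of fetching a standard RSS 2.0 feed and listing each episode's audio enclosure; the whole open ecosystem rests on the fact that any client can do this without asking anyone's permission.

```python
# Minimal sketch: why "open" podcasting needs no gatekeeper.
# Any app can fetch a standard RSS 2.0 feed and pull the MP3 enclosures directly.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/podcast/feed.xml"  # hypothetical feed URL

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# RSS 2.0 layout: channel -> item -> title + enclosure (the audio file)
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled)")
    enclosure = item.find("enclosure")
    if enclosure is not None:
        # The enclosure's url attribute points at the raw MP3 any client can fetch.
        print(title, "->", enclosure.get("url"))
```

Nothing in that flow requires the feed's owner to approve a particular app, which is exactly the property an exclusive, in-silo deal gives up.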
If you're visiting our site today (and I guess, forever into the future if you don't click "got it") you will now see a notification at the bottom of the site saying that this site uses cookies. Of course, this site uses cookies. Basically any site uses cookies for all sorts of useful non-awful, non-invasive purposes. We use cookies, for example, to track your preferences (including when you turn off ads on the site, which we let you do for free). In order to make sure those ads are gone, or whatever other preferences stay in place, we use cookies.

For the last few years, of course, you've probably seen a bunch of sites pop up boxes "notifying" you that they use cookies. For the most part, this has to do with various completely pointless EU laws and regulations that probably make regulators feel good, but do literally nothing to protect your privacy. Worst are the ones that suggest that by continuing on the site you've made some sort of legal agreement with the site (come on...). These cookie notification pop ups do not help anyone. They don't provide you particularly useful information, and they don't lead you to a place that is more protective of your actual privacy. They just annoy people, and so people ignore them, leave the site, or (most commonly) just "click ok" to get the annoying bar or box out of the way to get to the content they wanted to see in the first place.

Here's the stupendously stupid thing about all of this: you are already in control. If you don't like cookies, your browser gives you quite a lot of control over which ones you keep, and how (and how often) you get rid of them. Some browsers, like Mozilla's Firefox Focus browser, automatically discard cookies as soon as you close a page (it's great for mobile browsing, by the way). Of course, that leads to some issues if you want to remain logged in on certain pages, or to have them remember preferences, but for those you can use a different browser or change various settings. It's nice that the power to handle cookies is very much up to you. We here at Techdirt like it when the control is pushed out to the ends of the network, rather than controlled in the middle.

But, because it makes some privacy regulators feel like they've "done something", they require such a pointless "cookie notification" on sites. Recently, one of our ad providers told us that we, too, needed to include such a cookie notification, or else we'd lose the ability to serve any ads from Google, who (for better or for worse) is one of the major ad providers out there. We did not get a clear explanation for why we absolutely needed to add this annoying notification that doesn't really help anyone, but the pleas were getting more and more desperate, with all sorts of warnings. We even asked if we could just turn off the ads entirely (which would, of course, represent something of a financial hit) and they seemed to indicate that because we still use other types of cookies (again, including cookies to say "don't show this person any ads"), we had to put up the notification anyway.

The last thing we were told is that if we didn't put up a cookie notification within a day, Google would "block us globally." I'm honestly not even sure what this means. But, either way, we're now showing you a cookie notification. It's silly and annoying and I don't think it serves your interests at all. It serves our interests only inasmuch as it gets our partner to stop bugging us. Don't you feel better?

You can click "got it" and make it go away.
Or you can not click it, and it will stay. You can block cookies in your browser, or you can leave them. You can toss out your cookies every day or every week (not necessarily a bad practice). You're in control. But we have to show you the notification, and so we are.
So if the First Amendment protects site moderation and curation decisions, why are we even talking about "neutrality?"

It's because some of the bigger tech companies -- I'm looking at you, Google and Facebook -- naively assumed good faith when asked about "neutrality" by congressional committees. They took the question as inquiring whether they apply neutral content moderation principles, rather than as Act I in a Kabuki play where bad-faith politicians and pundits would twist this as meaning that the tech companies promised "scrupulous adherence to political neutrality" (and that Act II, as described below, would involve cherry-picking anecdotes to try to show that Google and Facebook were lying, and are actually bastions of conservative-hating liberaldom).

And here's the thing -- Google, Twitter, and Facebook probably ARE pretty damn scrupulously neutral when it comes to political content (not that it matters, because THE FIRST AMENDMENT, but bear with me for a little diversion here). These are big platforms, serving billions of people. They've got a vested interest in making their platforms as usable and attractive to as many people as possible. Nudging the world toward a particular political orthodoxy? Not so much.

But that doesn't stop Act II of the bad faith play. Let's look at how unmoored from reality it is.

Anecdotes Aren't Data

Anecdotes -- even if they involve multiple examples -- are meaningless when talking about content moderation at scale. Google processes 3.5 billion searches per day. Facebook has over 1.5 billion people looking at its newsfeed daily. Twitter suspends as many as a million accounts a day.

In the face of those numbers, the fact that one user or piece of content was banned tells us absolutely nothing about content moderation practices. Every example offered up -- from Diamond & Silk to PragerU -- is but one little greasy, meaningless mote in the vastness of the content moderation universe.

"Neutrality?" You keep using that word . . .

One obvious reason that any individual content moderation decision is irrelevant is simple numbers: a decision representing 0.00000001 of all decisions made is of absolutely no statistical significance. Random mutations -- content moderation mistakes -- are going to cause exponentially more postings or deletions than even a compilation of hundreds of anecdotes can provide. And mistakes and edge cases are inevitable when dealing with decision-making at scale.

But there's more. Cases of so-called "political bias" are, if it is even possible, even less determinative, given the amount of subjectivity involved. If you look at the right-wing whining and whinging about their "voices being censored" by the socialist techlords, don't expect to see any numerosity or application of basic logic.

Is there any examination of whether those on "the other side" of the political divide are being treated similarly? That perhaps some sites know their audiences don't want a bunch of over-the-top political content, and thus take it down with abandon, regardless of which political perspective it's coming from?

Or how about acknowledging the possibility that sites might actually be applying their content moderation rules neutrally -- but that nutbaggery and offensive content isn't evenly distributed across the political spectrum? And that there just might be, on balance, more of it coming from "the right?"

But of course there's not going to be any such acknowledgement.
It's just one-way bitching and moaning all the way down, accompanied by mewling about "other side" content that remains posted.

Which is, of course, also merely anecdotal.

Reposted from the Socially Awkward blog.
FBI Director Chris Wray's potshots at Apple during the joint press conference about the Pensacola Air Base shooting weren't the only ones delivered by a federal employee. Famous anti-encryptionist/current DOJ boss Bill Barr made even more pointed comments during his remarks, mostly glossing over the FBI's brilliant discovery that the shooter was linked to al Qaeda -- something al Qaeda had claimed shortly after the shooting took place.

The DOJ never got the court battle it wanted. Its second attempt to talk a court into compelled decryption never gained momentum and FBI techs were eventually able to do the thing the DOJ couldn't make Apple do: access the phones' contents. Barr's comments had very little to do with the supposed matter at hand: the investigation of a shooting on a US military base. Instead, Barr gave perfunctory thanks to the hardworking men and women of the FBI before moving on to declaring Apple an enemy of the people, if not an actual enemy of the state.

Here's the first smear, which insinuates device encryption is a criminal co-conspirator.
We've talked for many years now about the overreach of the GDPR and how its concepts of "data protection" often conflict with both concepts of free expression and very common everyday activities. The latest example, first highlighted by Neil Brown, is that a Dutch court has said that a grandmother must delete photos of her grandkids that she posted to Facebook and Pinterest, because it violates the GDPR. There is, obviously, a bit more to the case, and it involves a family dispute involving the parents and the grandmother, but, still, the end result should raise all sorts of questions.

And while many EU data protection folks are saying this was to be expected based on earlier EU rulings regarding the GDPR, it doesn't make the result any less ridiculous. As the BBC summarizes:
You would think that House Democrat leaders like Speaker Pelosi and Reps. Adam Schiff and Jerry Nadler, who helped lead the impeachment effort against President Trump, would leap at the chance to stop Trump and the FBI from conducting warrantless searches of Americans' internet browsing habits. Instead, they seem to be supporting it and are trying to scapegoat Rep. Zoe Lofgren -- who is trying to safeguard our internet surfing -- because she's dared to push for a fix to the law. At issue is the FISA renewal bill, in which Congress has decided to take the FBI's "backdoor searches" out of the backdoor and move them around to the front: explicitly allowing the FBI to go trawling through internet/browsing/search histories collected without a warrant by the NSA.

As we've discussed, over in the Senate, Senators Ron Wyden and Steve Daines pushed for a pretty straightforward amendment to say that these searches should require a warrant (yes, the 4th Amendment alone should require that, but... ) and their amendment fell just one vote short. So even though significantly more than half of the Senate voted to require a warrant, the bill that passed out of the Senate does not require a warrant. The ball then moved to the House side, and you'd think that leadership there would just put in a similar amendment -- and, indeed, Rep. Lofgren had one ready to go. This shouldn't be a surprise. Lofgren has fought to end backdoor searches for years.

However, a story in Politico argued that Lofgren's amendment somehow threatened to "blow up" a well-orchestrated Congressional move to make sure the FBI could keep spying without a warrant. Dell Cameron, over at Gizmodo, breaks down just how ridiculous this whole story is, and how it appears that it's actually Speaker Pelosi and Rep. Schiff who want to let the FBI's warrantless searches continue, and they've strong-armed Rep. Nadler into supporting this position (Nadler, who is terrible on copyright issues, is usually pretty good on civil liberties), while trying to pin any "blame" on Lofgren.

Much of the story covers shenanigans to box out Lofgren back in February when she sought to add her version of the Wyden/Daines amendment:
In the Ultimate 2020 White Hat Hacker Bundle, you will learn from scratch how to master ethical hacking and cybersecurity. You'll develop an understanding of the threat and vulnerability landscape through threat modeling and risk assessments, you'll learn network hacking techniques and vulnerability scanning, you'll find out how to protect your devices with end-point protection, and much more. The bundle is on sale for $39.90.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Wyden Siren is blaring. If you're unfamiliar, Senator Wyden has a pretty long history of what are generally known as Wyden Siren letters to the Director of National Intelligence. Wyden, one of the few Senators who has consistently shown a belief in protecting the civil liberties of Americans, has spent over a decade sending letters that always ask questions about how often certain very sketchy surveillance techniques are being used. And, every time he does so, it tends to be a signal that the method in question is used to a massive degree, while the intelligence community is running around insisting that it's nothing to be concerned about. If we've learned one thing, however, in all these years, it's that when Wyden asks these types of questions, it means you'd best pay attention, and the activity in question is happening way more than anyone thought before.

The latest, as first spotted by Zack Whittaker, is Wyden's letter to Acting Director of National Intelligence Richard Grenell asking how often the intel community and the FBI use their powers to spy on web surfing behavior:
For the better part of two years, Verizon has insisted that fifth-generation wireless (5G) would revolutionize everything. Simply by upgrading from 4G to 5G, Verizon repeatedly insisted, we'd usher forth a "fourth industrial revolution," resulting in smarter cars, smarter cities, and an endless array of innovation. 5G technology was so incredible, Verizon insisted, that it would also quickly usher forth incredible new cancer cures, allowing doctors to conduct remote heart surgery while wearing VR/AR headsets from the back of a rickshaw.

Granted, 5G was never actually that exciting. While an important update in terms of faster and more reliable mobile networks, the technology was rushed to market in such a way that coverage was sparse and overstated, 5G handsets were expensive and clunky, and service plans were equally pricey. Worse, recent studies have suggested that because the U.S. lacks a lot of the mid-band spectrum available in other countries (read: policy failure), 5G in the United States is going to be significantly slower and not all that much different than 4G (at least for a while).

On the heels of one particularly damning 5G report on slow US speeds by OpenSignal, Verizon is now suddenly attempting to temper enthusiasm, noting that 5G at first won't be all that much different from existing 4G networks:
All things are cyber these days, including handy government tools meant to shield thin-skinned leaders from criticism. For a guy who goes around bragging about killing drug dealers, Philippines President Rodrigo Duterte seems oddly unable to handle being called what he is.
One of the dangers when we talk about esports and its rapid growth, particularly during this pandemic, is that those not in the know can see this as hobbyists touting their own hobby. It's understandable to some degree, what with this industry being both in its infancy and growing at an exponential pace. Still, while we've had several posts lately focusing on how esports is happily filling the void of traditional live sports during the COVID-19 pandemic, it is worth remembering that this isn't just a hobby any longer. It's an economy in and of itself.

And that, to put a fine point on it, means jobs. Lots and lots of jobs, actually, and economic growth going along with it. NBC has an illuminating post on just how fast streaming companies are expanding to keep up with the esports demand.
Apple and Google have now released updates to their mobile operating systems to include a new capability for COVID-19 exposure notification. This new technology, which will support contact tracing apps developed by public health agencies, is technically impressive: it enables notifications of possible contact with COVID-positive individuals without leaking any sensitive personal data. The only data exchanged by users are rotating random keys (i.e., a unique 128-bit string of 0s and 1s) and encrypted metadata (i.e., the protocol version in use and transmitted power levels). Keys of infected individuals, but not their identities or their locations, are shared with the network upon a positive test with the approval of a government-sanctioned public health app.

Despite being a useful tool in the pandemic arsenal and adopting state-of-the-art techniques to protect privacy, the Apple-Google system has drawn criticism from several quarters. Privacy advocates are dreaming up ways the system could be abused. Anti-tech campaigners are decrying "tech solutionism." None of these critiques stands up to scrutiny.

How the exposure notification API works

To get a sense for how the Apple-Google exposure notification system works, it is useful to consider a hypothetical system involving raffle tickets instead of Bluetooth beacons. Imagine you were given a roll of two-part raffle tickets to carry around with you wherever you go. Each ticket has two copies of a randomly-generated 128-digit number (with no relationship to your identity, your location, or any other ticket; there is no central record of ticket numbers). As you go about your normal life, if you happen to come within six feet of another person, you exchange a raffle ticket, keeping both the ticket they gave you and the copy of the one you gave them. You do this regularly and keep all the tickets you've exchanged for the most recent two weeks.

If you get infected with the virus, you notify the public health authority and share only the copies of the tickets you've given out -- the public health officials never see the raffle tickets you've received. Each night, on every TV and radio station, a public health official reads the numbers of the raffle tickets it has collected from infected patients (it is a very long broadcast). Everyone listening to the broadcast checks the tickets they've received in the last two weeks to see if they've "won." Upon confirming a match, an individual has the choice of doing nothing or seeking out a diagnostic test. If they test positive, then the copies of the tickets they've given out are announced in the broadcast the next night. The more people who collect and hand out raffle tickets everywhere they go, and the more people who voluntarily announce themselves after hearing a match in the broadcast, the better the system works for tracking, tracing, and isolating the virus.

The Apple-Google exposure notification system works similarly, but instead of raffle tickets, it uses low-power Bluetooth signals. Every modern phone comes with a Bluetooth radio that is capable of transmitting and receiving data over short distances, typically up to around 30 feet. Under the design agreed to by Apple and Google, iOS and Android phones updated to the new OS, that have their Bluetooth radios on, and that have a public health contact tracing app installed will broadcast a randomized number that changes every 10 minutes.
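To make the mechanics concrete, here is a deliberately simplified sketch of the scheme as described above: rotating random 128-bit identifiers, a local record of identifiers you've heard nearby, and an on-device check against a published list of positive keys. This is an illustration of the idea only, not Apple or Google's actual API (the real protocol derives its rolling identifiers cryptographically and exchanges them over Bluetooth); all names below are made up.

```python
import secrets
import time

class Device:
    """Toy model of one phone in the exposure notification scheme."""

    def __init__(self):
        self.own_keys = []       # (timestamp, key) pairs we have broadcast
        self.heard_keys = set()  # identifiers received from nearby phones

    def new_rolling_key(self) -> bytes:
        # In the real system this rotates roughly every 10 minutes and is
        # derived cryptographically; here it is just 128 random bits.
        key = secrets.token_bytes(16)
        self.own_keys.append((time.time(), key))
        return key

    def hear(self, key: bytes) -> None:
        # Called when another phone's broadcast is received nearby.
        self.heard_keys.add(key)

    def keys_to_report(self, days: int = 14):
        # On a positive diagnosis, share only the keys we broadcast recently.
        cutoff = time.time() - days * 86400
        return [k for (t, k) in self.own_keys if t >= cutoff]

    def check_exposure(self, published_positive_keys) -> bool:
        # Matching happens entirely on the device; nothing is uploaded.
        return any(k in self.heard_keys for k in published_positive_keys)

# Two phones come within range and exchange rolling identifiers.
alice, bob = Device(), Device()
bob.hear(alice.new_rolling_key())
alice.hear(bob.new_rolling_key())

# Alice tests positive and, with approval, publishes her recent keys.
published = alice.keys_to_report()

# Everyone downloads the published list and checks it locally.
print("Bob exposed?", bob.check_exposure(published))      # True
print("Alice exposed?", alice.check_exposure(published))  # False
```

Note that the only thing that ever leaves a device is the list of keys a diagnosed user chooses to publish; the matching against keys heard nearby never leaves the phone.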
In addition, phones with contact tracing apps installed on them will record any keys they encounter that meet criteria set by app developers (public health agencies) on exposure time and signal strength (say, a signal strength correlating with a distance up to around six feet away). These parameters can change with new versions of the app to reflect growing understanding of COVID-19 and the levels of exposure that will generate the most value to the network. All of the keys that are broadcast or received and retained are stored on the device in a secure database.

When an individual receives a positive COVID-19 diagnosis, she can alert the network to her positive status. Using the app provided by the public health authority, and with the authority's approval, she broadcasts her recent keys to the network. Phones download the list of positive keys and check to see if they have any of them in their on-device databases. If so, they display a notification to the user of possible COVID-19 exposure, reported in five-minute intervals up to 30 minutes. The notified user, who still does not know the name or any other data about the person who may have exposed her to COVID-19, can then decide whether or not to get tested or self-isolate. No data about the notified user leaves the phone, and authorities are unable to force her to take any follow-up action.

Risks to privacy and abuse are extremely low

As global companies, Google and Apple have to operate in nearly every country around the world, and they need to set policies that are robust to the worst civil liberties environments. This decentralized notification system is exactly what you would design if you needed to implement a contact tracing system but were concerned about adversarial behavior from authoritarian governments. No sensitive data ever leaves the phone without the user's express permission. The broadcast keys themselves are worthless, and cannot be tied back to a user's identity or location unless the user declares herself COVID-positive through the public health app.

Some European governments think Apple and Google's approach goes too far in preserving user privacy, saying they need more data and control. For example, France has indicated that it will not use Apple and Google's API and has asked Apple to disable other OS-level privacy protections to let the French contact tracing app be more invasive (Apple has refused). The UK has also said it will not use Apple and Google's exposure notification solution. The French and British approach creates a single point of failure ripe for exploitation by bad actors. Furthermore, when the government has access to all that data, it is much more likely to be tempted to use it for law enforcement or other non-public health-related purposes, risking civil liberties and uptake of the app.

Despite the tremendous effort the tech companies exerted to bake privacy into their API as a fundamental value, it is not enough for some privacy advocates. At Wired, Ashkan Soltani speculates about a hypothetical avenue for abuse. Suppose someone set up a video camera to record the faces of people who passed by, while also running a rooted phone -- one where the user has circumvented controls installed by the manufacturer -- that gave the perpetrator direct access to the keys involved.
Then, argues Soltani, when a COVID-positive key was broadcast over the network, the snoop would be able to correlate it with the face of a person captured on camera and use that to identify the COVID-positive individual.

While it is appropriate for security researchers like Soltani to think about such hypothetical attacks, the real-world damage from such an inefficient possible exploit seems dubious. Is a privacy attacker going to place cameras and rooted iPhones every 30 feet? And how accurate would this attack even be in crowded areas? In a piece for the Brookings Institution with Ryan Calo and Carl Bergstrom, Soltani doubles down, pointing out that "this 'decentralized' architecture isn't completely free of privacy and security concerns" and "opens apps based on these APIs to new and different classes of privacy and security vulnerabilities."

Yet if "completely free of privacy and security concerns" is the standard, then any form of contact tracing is impossible. Traditional physical contact tracing involves public health officials interviewing infected patients and their recent contacts, collecting that information in centralized government databases, and connecting real identities to contacts. The Google-Apple exposure notification system clearly outperforms traditional approaches on privacy grounds. Soltani and his collaborators raise specious problems and offer no solution other than privacy fundamentalism.

Skeptics of the Apple-Google exposure notification system point to a recent poll by the Washington Post that found "nearly 3 in 5 Americans say they are either unable or unwilling to use the infection-alert system." About 20% of Americans don't own a smartphone, and of those who do, around 50% said they definitely or probably would not use the system. While it's too early to know how much each component of coronavirus response contributes to suppression, evidence from Singapore and South Korea suggests that technology can augment the traditional public health toolbox (even with low adoption rates). In addition, there are other surveys with contradictory results. According to a survey by Harris Poll, "71% of Americans would be willing to share their own mobile location data with authorities to receive alerts about their potential exposure to the virus." Notably, cell phone location data is much more sensitive than the encrypted Bluetooth tokens in the Apple-Google exposure notification system.

Any reasonable assessment of the tradeoff between privacy and effectiveness for contact tracing apps will conclude that if the apps are at all effective, they are overwhelmingly beneficial. For cost-benefit analysis of regulations, the Environmental Protection Agency has established a benchmark of about $9.5 million per life saved (other government agencies use similar values). By comparison, the value of privacy varies depending on context, but the range is orders of magnitude lower than the value of saving a life, according to a literature review by Will Rinehart.

If we have any privacy-related criticism of the tech companies' exposure notification API, it is that it requires the user to opt in by downloading a public health contact tracing app before it starts exchanging keys with other users. This is a mistake for two reasons. First, it signals that there is a privacy cost to the mere exchange of keys, which there is not. Even the wildest scenarios concocted by security researchers entail privacy risks from the API only when a user declares herself COVID-positive.
Second, it means that the value of the entire contact tracing system is dependent on uptake of the app at all points in time. If the keys were exchanged all along, then even gradual uptake of the app would unlock value in the network that had built up even before users installed the app.

The exposure notification API is part of a portfolio of responses to the pandemic

Soltani, Calo, and Bergstrom raise other problems with contact tracing apps. They will result in false positives (notifications about exposures that didn't result in transmission of the disease) and false negatives (failures to notify about exposure because not everyone has a phone or will install the app). If poorly designed (without verification from the public health authority), apps could allow individuals who are not COVID-positive to "cry wolf" and frighten a bunch of innocent people, a practice known in the security community as "griefing." They want their readers to understand that the rollout of a contact tracing app using this API will not magically solve the coronavirus crisis.

Well, no shit. No one is claiming that these apps are a panacea. Rather, the apps are part of a portfolio of responses that can together reduce the spread of COVID and potentially avoid the need for rolling lockdowns until a cure or vaccine is found (think of how many more false negatives there would be in a world without any contact tracing apps). We will still need to wear masks, supplement phone-based tracing methods with traditional contact tracing, and continue some level of distancing until the virus is brought fully under control. (For a point-by-point rebuttal of the Brookings article, see here from Joshua B. Miller.)

The exposure notification API developed by Google and Apple is a genuine achievement: it will enable the most privacy-respecting approach to contact tracing in history. It was developed astonishingly quickly at a time when the world is in desperate need of additional tools to address a rapidly spreading disease. The engineers at Google and Apple who developed this API deserve our applause, not armchair second-guessing from unpleasable privacy activists.

Under ordinary circumstances, we might have the luxury of interminable debates as developers and engineers tweaked the system to respond to every objection. However, in a pandemic, the tradeoff between speed and perfection shifts radically. In a viral video in March, Dr. Michael J. Ryan, the executive director of the WHO Health Emergencies Programme, was asked what he's learned from previous epidemics and he left no doubt with his answer:
We're back! It's been a while since the last podcast, for obvious reasons, but today we've got a new episode following up on something we discussed with Mike Godwin in January: the Internet Society's proposed sale of the .org domain registry. That deal has since been cancelled, and some groups including the EFF assert that it showed ISOC can't be trusted to handle the registry, so this week Godwin joins us again to discuss what happened in more detail.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
The excellent podcast Radiolab has been running some shorter (than its normal fare) "dispatches" from the pandemic that have been quite interesting, but I wanted to take a quick look at one recent such episode that is mostly a discussion between host Jad Abumrad and ER doctor Avir Mitra, who, in a prior life, had interned at Radiolab, in which Mitra plays some of the voice memos he's been recording for himself as he deals with being an ER doctor on the frontlines in a hospital in NYC, where the largest number of COVID-19 cases are happening.

The whole episode is quite interesting, and they get into discussions about how doctors are recognizing that COVID-19 is not acting like other respiratory diseases, and they're finding all sorts of oddities -- like patients who should be passed out due to low blood oxygen levels acting like there's nothing wrong at all:
It should be no secret at all that the world is a different place than it was just a few months ago, thanks to the novel coronavirus and the disease it causes, COVID-19. We've been doing our best to deal with these trying times, as I hope you are as well. One thing we've noticed over the last few months is the role of technology in these crazy times, leading me to often wonder what this kind of crisis would have looked like even a decade ago. As we were seeing more and more stories highlighting the amazing ways in which technology has been a huge (sometimes literal) lifesaver, we thought it would be worth launching a new "edition" on our site, focused on the role technology has played during this pandemic.

We're not even entirely sure what sorts of stories we'll see in this section, but the intersection of the pandemic and the technology world is something that is worth exploring. Some of the stories out there about tech and COVID may be more obvious than others (really, how many stories can there be about how much Zoom everyone is using?), but we're going to try to dig a bit deeper, and explore the perhaps more unexpected ways in which technology is playing a role in our everyday lives under lockdown, as well as how technology is changing how businesses operate, and (perhaps most importantly) the role of technology in response to the pandemic itself (mitigating, treating, and -- most hopefully -- curing the disease).

This, like so much of what we do, is an experiment and we're excited to see where it goes. The posts will appear right here on Techdirt, or you can check them out directly (as they are posted) in a new dedicated tab up top.

We're excited that the Charles Koch Institute has agreed to be our launch sponsor for this new section of the site. As its Executive Director, Derek Johnson, said: "We are excited to continue our support of Techdirt, especially during this unique moment when we're likely to see significant creative destruction and experimentation with new business models in digital media. Innovation has been a force for good throughout human history, a trend especially evident today. Telling the story of how American individuals and institutions are leveraging digital tools during the coronavirus pandemic will reinforce that, as a society, we remain open to exploring creative applications of technology and ingenuity."

That perfectly sums up our general viewpoint on the importance of innovation in so many different aspects of our life -- and we expect it to be an educational journey to explore exactly how that innovative spirit plays out in helping get us through a massive pandemic.
One of the most frustrating aspects of discussing the internet, business models, and privacy is how many otherwise intelligent people continue to insist that Google and Facebook are "selling your data." It's a concept that is widely considered accurate, but has never been true. It's so ridiculous that it leads to silly Congressional exchanges between elected officials who are sure the tech companies are selling data, and the people from those companies themselves. Doing targeted advertising is not selling data. There are many, many things you can reasonably and accurately complain about regarding big internet companies and their use of data, but "selling" the data is not one of them.

As a refresher: the way targeted advertising works is that an advertiser agrees to place an ad and uses whatever system to target those ads to particular groupings of people, as set up by the ad platform. So, if you want to advertise to grumpy bloggers in their mid-40s, you can find a way to have those ads show to that demographic. But the advertiser doesn't get any data from the platform about anyone. The companies are selling access to highly targeted demographics, but that's never been selling data.

That doesn't mean there aren't other companies that do sell private data. There are. Lots of them. Data brokers, telcos, some ISPs, and even your local DMV have been caught selling your actual data. But for some reason, everyone wants to keep insisting that Google and Facebook also sell data, when they never have, and have always only sold targeted advertising in which the data only goes in one direction, and not back to the advertiser.

Now, that's all background to the very interesting news that the NY Times is now moving away from using 3rd party advertising services.
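To make that one-directional flow concrete, here's a tiny, hypothetical sketch of what an ad platform does internally when an advertiser buys a targeted campaign. All of the names are invented for illustration; the point is simply that the advertiser supplies targeting criteria and gets back an aggregate impression count, while the user profiles never leave the platform.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    age: int
    interests: set  # known only to the ad platform, never shared

class AdPlatform:
    def __init__(self, profiles):
        self._profiles = profiles  # private: stays inside the platform

    def run_campaign(self, ad_text: str, min_age: int, max_age: int, interest: str) -> int:
        """Show the ad to matching users; return only an impression count."""
        impressions = 0
        for p in self._profiles:
            if min_age <= p.age <= max_age and interest in p.interests:
                # In reality the ad would be rendered in this user's feed here.
                impressions += 1
        # The advertiser learns how many people saw the ad --
        # not who they are, their ages, or their interests.
        return impressions

platform = AdPlatform([
    UserProfile("u1", 45, {"blogging", "grumpiness"}),
    UserProfile("u2", 23, {"esports"}),
])

# The advertiser only ever supplies criteria and receives an aggregate number.
count = platform.run_campaign("Buy our thing", min_age=40, max_age=49, interest="blogging")
print("impressions:", count)  # 1
```

Contrast that with a data broker, whose product is the profile list itself; that is the distinction the "selling your data" complaints keep missing.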
Big Think Edge is an unparalleled library of video lessons created by educators and taught by world-class experts like Malcolm Gladwell and Arianna Huffington. Learn the most important skills of the 21st century -- like emotional intelligence, problem-solving, and critical thinking -- to fuel your personal and professional growth from the top experts in the field. Big Think Edge releases 3 new exclusive lessons each week -- that's 12 new, actionable life and career lessons every month. There are multiple subscriptions for sale: 1 year for $30, 2 years for $50, 3 years for $70, and unlimited for $160.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.