Learn about popular liquors and up your mixology game with the 2021 Ultimate Mixology And Cocktail Bundle. The 5 courses cover gin, tequila, whiskey, rum, and vodka. In each course you will not only learn the background and history of 20 of the most popular cocktails made with that particular liquor, but also the techniques that will enable you to mix truly world-class versions of each one. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Lots of attention has been paid to the mess down in Australia with its news link tax "bargaining code," and Facebook's response to it, including the eventual caving. So now both Google and Facebook have effectively agreed to pay a tax to Australia's largest media companies... for daring to send them free traffic. It's the most bizarre thing in the world. Imagine if every time TV stations ran an advertisement, they also had to pay the advertiser. That's what this is.

However, we should focus a bit on Microsoft's role in all of this. First, before Google agreed to its deal and while it was still threatening to shut down news links in Australia, Microsoft stepped in and said it would gladly support the law. This was transparently greedy of the company. Basically, Microsoft has realized that its failure to compete in the marketplace means it should support this kind of law, knowing that one of two things will happen: (1) Google will bail out of a market, leaving it open to Microsoft, or (2) it'll just cost its competitor Google a lot of money.

The fact that this also fucks with the basic concept of the open web and not having to pay to link doesn't seem to enter into Microsoft's calculus at all. This takes Microsoft back to the shameful era in which it paid some godawful amount of money to political trickster Mark Penn not to help Microsoft better compete, but to simply attack Google like a political candidate. This is classic political entrepreneurship rather than market entrepreneurship. It's a sign of failure when you're not trying to actually innovate, but simply abusing the political process to hamstring competitors. But in this case it's even worse, because it's not just Google and Facebook that get screwed, but the entire concept of the open web.

And it gets worse. Microsoft seems so positively giddy about how this all worked out in Australia that it's now taken the campaign global.
Microsoft's President, Brad Smith, wrote a blog post calling for this policy to be adopted elsewhere. Incredibly, Smith seems to argue that the attack on the Capitol might not have happened if Google and Facebook were taxed this way globally. The whole thing is just... so obnoxiously dishonest. It bemoans the loss of "professional journalism" and blames it all on social media.

But that's garbage. Multiple studies have shown that Fox News was a bigger problem in spreading disinformation than social media. And remember that Fox News boss Rupert Murdoch is the main beneficiary of the Australian law. It's literally taking money from the less problematic spreader of disinfo and giving it to the more problematic one. But Smith and Microsoft act as if this is all for the good of society:
If you listen to Verizon marketing, it goes something like this: fifth generation (5G) wireless is going to absolutely transform the world by building the smart cities of tomorrow, revolutionizing medicine, and driving an ocean of innovation.

In reality, US 5G has largely landed with a thud, with studies showing that the US version is notably slower than overseas 5G (and in fact often slower than the 4G networks you're used to) because the US didn't do enough to drive middle-band spectrum to market. Contrary to Verizon's claims, it's not a technology that's likely to revolutionize medicine. Service availability also remains very spotty, and US consumers continue to pay some of the highest prices for mobile data in the developed world, regardless of standard.

Some variations of the technology are also a bit of a battery hog, something Verizon support was begrudgingly forced to acknowledge this week by informing users that if they want better battery life, they're better off turning 5G off:
Here's one more horrifying postscript to the still-ongoing criminal prosecution(s) of Backpage's executives. Courts and attorneys general (including newly installed VP Kamala Harris) tried to run the company in on prostitution charges, but often found their efforts rebuffed by courts who didn't see how hosting third-party ads was the same thing as aiding and abetting sex trafficking.

Prosecutions abounded. So did a cottage industry of pearl clutchers and hand wringers -- many of whom were holding powerful offices in Washington DC. These people were convinced the only way to fight sex trafficking was to punch holes in Section 230. Despite being warned against doing so by none other than the DOJ, they went ahead and passed FOSTA. This anti-sex trafficking law has been used exactly once in a criminal case since its inception.

But here's the new thing, via Stephen Lemons writing for Front Page Confidential. The undercurrent of corruption behind the Backpage prosecutions continues to flow. It was never meant to be a fair fight. It was meant to make an example of Backpage after other online services managed to shrug off misguided investigations and prosecutions attempting to turn hosts into criminal confederates.

One of the goals of government work -- especially as it pertains to checks and balances -- is to avoid any appearance of impropriety. But in Arizona, appearances appear to be unimportant. Impropriety is in the eye of the beholder. And if the beholder wields less power, too fucking bad. Here's how things are being handled in the government's attempt to prosecute Michael Lacey and Jim Larkin of Backpage.

Appearance? No, actual impropriety!
With much of the world in various states of lockdown, the videoconference meeting has become a routine part of many people's day, and a hated one. A fascinating paper by Jeremy Bailenson, director of Stanford University's Virtual Human Interaction Lab, suggests that there are specific problems with videoconference meetings that have led to what has been called "Zoom fatigue", although the issues are not limited to that platform. Bailenson believes this is caused by "nonverbal overload", present in at least four different forms. The first involves eye gaze at a close distance:
When we criticize Internet regulations like the CCPA and GDPR, or lament the attempts to roll back Section 230, one of the points we almost always raise is how unduly expensive these policy decisions can be for innovators. Any law that increases the risk of legal trouble increases the need for lawyers, whose services rarely come cheap.

But bare cost is only part of the problem. All too often, policymakers seem to assume an infinite supply of capable legal counsel, and it's an assumption that needs to be questioned.

First, there is not an infinite number of lawyers. For better or worse, the practice of law is a heavily regulated profession with significant barriers to entry. The legal industry can be fairly criticized, and often is, for making it more difficult and expensive to become a lawyer than perhaps it should be, but there is at least some basic threshold of training, competence, and moral character we should want all lawyers to have attained, given the immense responsibility they are regularly entrusted with. These requirements will inevitably limit the overall lawyer population.

(Of course, there shouldn't be an infinite number of lawyers anyway. As discussed below, lawyers play an important role in society, but theirs is not the only work that is valuable. In the field of technology law, for example, our need for people to build new things should well outpace our need for lawyers to defend what has been built. We should be wary of creating such a need for the latter that the legal profession siphons off too much of the talent able to do the former.)

But even where we have lawyers, we still need the right kind of lawyers. Lawyers are not really interchangeable. Different kinds of lawyering need different types of skills and subject-matter expertise, and lawyers will generally specialize, at least to some extent, in what they need to master for their particular practice area.
For instance, a lawyer who does estate planning is not generally the one you'd want to defend you against a criminal charge, nor would one who does family law ordinarily be the one you'd want writing your employment manual. There are exceptions, but generally because that particular lawyer went out of their way to develop parallel expertise. The basic fact remains: simply picking any old lawyer out of the yellow pages is rarely likely to lead to good results; you want one experienced in dealing with the sorts of legal issues you actually have, substantively and practically.

True, lawyers can retrain, and it is not uncommon for lawyers to switch their focus and develop new skills and expertise at some point in their careers. But it's a problem if a disproportionate number start to specialize in the same area because, just as we need people available to work in professions other than law, even within the law we still need other kinds of lawyers available to work on areas outside these particular specializations.

And we also need to be able to afford them. We already have a serious "access to justice" problem, where only the most resourced are able to obtain legal help. A significant cause of this problem is the expense of law school, which makes it difficult for graduates to resist the siren call of more remunerative employment, but it's a situation that will only get worse if lawyer-intensive regulatory schemes end up creating undue demand for certain legal specializations. For example, as we pass a growing thicket of complex privacy regulations, we create the need for more and more privacy lawyers to help innovators deal with these rules. But as the need for privacy lawyers outstrips the ready availability of lawyers with this expertise, it threatens to raise the costs for anyone needing any sort of lawyering at all.
It's a basic issue of supply and demand: the more privacy lawyers that are needed, the more expensive it will be to attract them. And the more these lawyers are paid a premium to do this work, the more it will lure lawyers away from other areas that still need serving, making it all the more expensive to hire those who are left.

Then there is the question of where lawyers even get the expertise they need to be effective counsel in the first place. The dirty little secret of legal education is that, at least until recently, it probably wasn't at their law schools. Instead, lawyers have generally been trained up on the job, and what newbie lawyers ended up learning has historically depended on what sort of legal job it was (and how good a legal job it was). Recently, however, there has been growing recognition that it really doesn't make sense to graduate lawyers unable to competently do the job they are about to be fully licensed to do, and one way law schools have responded is by investing in legal clinics.

By and large, clinics are a good thing. They give students practical legal training by letting them basically do the job of a lawyer, with the benefit of supervision, as part of their legal education. In the process, they acquire important skills and start to develop subject-matter expertise in the area the clinic focuses on, which can be almost any practice area, including, as is relevant here, technology law. Meanwhile, clinics generally let students provide these legal services to clients far more affordably than clients would normally be able to obtain them, which partially helps address the access to justice problem.

However, there are still some significant downsides to clinics, including the inescapable fact that it is students who are basically subsidizing the legal services they are providing, by having to pay substantial amounts of money in tuition for the privilege of getting to do this work.
A recurrent theme here is that law schools are notoriously expensive, often underwritten with loans, which means that students, instead of being paid for their work, are essentially financing the client's representation themselves.

And that arrangement matters as policymakers remain inclined to impose regulations that increase the need for legal services without better considering how that need will be met. It has been too easy for too many to assume that these clinics will simply step in to fill the void, with an endless supply of students willing and able to pay to subsidize this system. Even if this supposition were true, it would still prompt the question of who these students are. The massive expense of law school is already shutting plenty of people out of the profession and robbing it of needed diversity by making it financially out of reach for too many, as well as making it impossible for those who do make it through to turn down more lucrative legal jobs upon graduation in favor of ones that would be more socially valuable. The last thing we need is a regulatory environment that depends on this teetering arrangement to perpetuate it.

Yet that's the upshot of much of the policy lawmakers keep crafting. For instance, in the context of Section 1201 rulemakings, it has been openly presumed that clinics would always be available to do the massive amount of work necessary to earn back for the public the right to do something it was already supposed to be legally allowed to do.
But it's not just these cited examples of copyright or privacy law that are a problem; any time a statute or regulatory scheme establishes an unduly onerous compliance requirement, or reduces any of the immunities and safe harbors innovation has depended on, it puts a new strain on the legal profession, which now has to come up with the help from somewhere.

At the same time, however, good policy doesn't necessarily mean eliminating the need for lawyers entirely, as the CASE Act tries to do. The bottom line is that legal services are not like other professional services. Lawyers play a critical role in upholding due process, and laws like the CASE Act that short-circuit those protections are a problem. But so are any laws that have the effect of interfering with that greater Constitutional purpose of the legal profession.

For a society that claims to be devoted to the "rule of law," ensuring that the public can realistically obtain the legal help it needs should be a policy priority at least on par with anything else driving tech regulation. Lawmakers therefore need to take care in how they make policy to ensure they do not end up distorting the availability and affordability of legal services in the process. Such care requires (1) carefully calibrating the burden of any imposed policy so it does not unnecessarily drive up the need for lawyers, and (2) specifically asking the question: who will do the work? They cannot continue to simply leave "insert lawyers here" in their policy proposals and expect everything to be fine. If they don't pointedly address exactly where these lawyers will come from, then it won't be.
Last week, we hosted Section 230 Matters, a virtual Techdirt fundraiser featuring a panel discussion with the two lawmakers who wrote the all-important text and got it passed 25 years ago: Chris Cox and Senator Ron Wyden. It was informative and entertaining, and for this week's episode of the podcast, we've got the full audio of the panel discussion about the history, evolution, and present state of Section 230.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Tennessee is filled with awful legislators. Fortunately, despite itself, the legislature passed an anti-SLAPP law that appears to finally be putting an end to ridiculous libel lawsuits in the state. Prior to this, residents and libel tourists were abusing the law to do things like silence legitimate criticism and -- believe it or not -- sue a journalist for things said by someone he interviewed.

While the state legislature continues pissing tax dollars away asking the federal government to institute criminal penalties for flag burning and requesting state colleges forbid student-athletes from expressing anything other than reverence for the flag, state courts are quietly ensuring their better legislative efforts remain viable.

A short ruling [PDF] issued by a Tennessee circuit court says the state's anti-SLAPP law is not only constitutional, but serves a valuable purpose. (via Courthouse News)

The plaintiff -- Tiny House Chattanooga -- sued Sinclair Broadcasting after news coverage of the fallout from a reality program episode involving the tiny house manufacturer resulted in some acrimonious behavior by both parties: Mike Bedsole of Tiny House and the nominal recipients of his tiny house, Rebecah and Ben Richards. The couple was apparently promised a house -- and some TV coverage -- but received neither.
ICE has always had a casual relationship with the Fourth Amendment. Since it's in the business of tracking foreigners, it has apparently decided the rights traditionally extended to them haven't actually been extended to them.

Anything not nailed down by precedential court decisions or federal legislation gets scooped up by ICE. This includes location data pulled from apps that would appear to be subject to Supreme Court precedent on location tracking. ICE routinely engages in warrantless device searches -- something its legal office has failed to credibly justify in light of the Riley decision. And the Fourth Amendment -- along with judicial oversight -- is swept away completely by ICE's practice of deploying pre-signed warrants to detain immigrants. The agency is also not above forging judges' signatures to send "dangerous" immigrants packing.

The latest exposure of ICE's tactics shows it will gather everything and anything to hunt down people who, for the most part, are just trying to give their families a better shot at survival. Whatever can be had without a warrant will be had. That's the message being sent by ICE, and relayed to us by Drew Harwell of the Washington Post. (h/t Magenta Rocks)
The whole dynamic between Facebook and the Oversight Board has received lots of attention -- with many people insisting that the Board's lack of official power makes it effectively useless. The specifics, again, for most of you not deep in the weeds on this: Facebook has only agreed to be bound by the Oversight Board's decisions on a very narrow set of issues: whether a specific piece of content that was taken down should have been left up. Beyond that, the Oversight Board can make recommendations on policy issues, but the company doesn't need to follow them. I think this is a legitimate criticism and concern, but it's also a case where, if Facebook itself actually does follow through on the policy recommendations, and everybody involved acts as if the Board has real power... then the norms around it might mean that it does have that power (at least until there's a conflict, and you end up in the equivalent of a Constitutional crisis).

And while there's been a tremendous amount of attention paid to the Oversight Board's first set of rulings, and to the fact that Facebook asked it to review the Trump suspension, last week something potentially much more important and interesting happened. Along with those initial rulings on the "up/down" question, the Oversight Board also suggested a pretty long list of policy recommendations for Facebook. Again, under the setup of the Board, Facebook only needed to consider these, but was not bound to enact them.

Last week Facebook officially responded to those recommendations, saying that it had agreed to take action on 11 of the 17 recommendations, is assessing the feasibility of another five, and is taking no action on just one. The company summarized those decisions in the link above, and put out a much more detailed PDF exploring the recommendations and Facebook's responses.
It's actually interesting reading (or at least it is for someone like me who likes to dig deep into the nuances of content moderation).

Since I'm sure it's most people's first question: the one "no further action" was in response to a policy recommendation regarding COVID-19 misinformation. The Board had recommended that when a user posts information that disagrees with advice from health authorities, but where the "potential for physical harm is identified but is not imminent," then "Facebook should adopt a range of less intrusive measures." Basically, removing such information may not always make sense, especially if it's not clear that the information disagreeing with health authorities is actively harmful. As per usual, there's a lot of nuance here. As we discussed, early in the pandemic the suggestions from "health authorities" later turned out to be inaccurate (like the WHO and CDC telling people not to wear masks in many cases). That makes relying on those health authorities as the be-all, end-all of content moderation for disinformation inherently difficult.

The Oversight Board's recommendation on this issue more or less tried to walk that line, recognizing that health authorities' advice may adapt over time as more information becomes clear, and that automatically silencing those who push back on the official suggestions from health officials may lead to over-blocking. But, obviously, this is a hellishly nuanced and complex topic. Part of the issue is that -- especially in a rapidly changing situation, where our knowledge base starts out with little information and is constantly correcting -- it's difficult to tell who is pushing back against official advice for good reasons and who is doing so for conspiracy theory nonsense reasons (and there's a very wide spectrum between those two things). That creates (yet again) an impossible situation.
The Oversight Board was suggesting that Facebook should be at least somewhat more forgiving in such situations, as long as it doesn't see any "imminent" harm from those disagreeing with health officials.

Facebook's response isn't so much a pushback against the Board's recommendation as an argument that it already takes a "less intrusive" approach. It also argued that Facebook and the Oversight Board basically disagree on the definition of "imminent danger" from bad medical advice (the specific issue came up in the context of someone in France recommending hydroxychloroquine as a treatment for COVID). Facebook said that, contrary to the Board's finding, it did think this represented imminent danger:
Let's be clear about something: the net neutrality fight has always really been about monopolization and a lack of broadband competition. Net neutrality violations, whether it's wireless carriers blocking competing mobile payment services or an ISP blocking competing voice services, are just symptoms of that lack of competition. If we had meaningful competition in broadband, we wouldn't need net neutrality rules, because consumers would vote with their wallets and leave any ISP that behaved like an asshole.

But American broadband is dominated by just a handful of very politically powerful telecom giants fused to our national security infrastructure. Because of this, lawmakers and regulators routinely don't try very hard to fix the problem, lest they upset a trusted partner of the FBI/NSA/CIA or lose out on campaign contributions. As a result, US broadband is heavily monopolized and, in turn, mediocre in nearly every major metric that matters. US ISPs routinely, repeatedly engage in dodgy behavior that sees zero real penalty from our utterly captured regulators.

The net neutrality fight has always really been a proxy fight over whether we want functional government oversight of these monopolies. The monopolies, it should be said, would prefer that there be absolutely none. It's why for the last 20 years or so they've been on a relentless tear to strip away all state and federal regulatory oversight of their broken business sector, culminating in 2018's repeal of net neutrality -- which not only (and this part is important) killed net neutrality rules, but gutted the FCC's consumer protection authority (right before a pandemic, as it turned out).

The repeal even attempted to ban states from being able to protect consumers from things like billing fraud, an effort the courts haven't looked kindly upon so far.
But again, the goal here is clear: zero meaningful oversight of telecom monopolies.

So with that as background, imagine my surprise when New York Times columnist Shira Ovide, whose tech coverage is usually quite insightful, informed the paper's 7.5 million subscribers that this entire decades-long quest to thwart corruption and monopolization is "pointless":
It's pretty clearly established that you have the right to record public servants as they perform their public duties. There are a few exceptions, but for the most part, if you're not interfering with their work, record away. Public servants hate this, of course, but there's not much they can do about it. Sure, they can try to use local laws to shut down recordings, but those efforts have routinely been rejected by federal courts.

Enter the TSA and some agents who felt they shouldn't be recorded doing their work. The TSA may believe it's doing valuable national security work that can't be recorded by third parties, but it's actually doing nothing of the sort. There's nothing inherently secret about a pat down in the screening area, which is something that happens all the time and often can be observed by everyone else in the area.

The TSA agents in this case [PDF] felt they had a right to not be recorded. That's not actually a thing, as the court reminds them. (via the Volokh Conspiracy)

The plaintiff, Dustin Dyer, and his children cleared initial screening. Dyer's husband did not. TSA agents began their pat down of Dyer's husband, and Dyer began recording them, standing ten feet away. He did not interfere with the screening. Despite this, TSA agent Natalie Staton told Dyer his recording was "impeding" the agent performing the pat down. Dyer refused to stop recording, so Agent Staton went and got her supervisor, Shirrellia Smith.

Smith also told Dyer he could not record the pat down. Agent Staton then asked her supervisor to "order" Dyer to delete his recording. Which he did.
In early February, we discussed an extremely dumb lawsuit brought by a theme park in Utah called Evermore against Taylor Swift, who recently released an album called Evermore. The whole thing is buckets of stupid, with the Evermore theme park claiming that because it released a couple of songs on Apple Music, this somehow puts it in the same marketplace as Taylor Swift. Then there were complaints that Swift's album pushed down search results for the theme park, which doesn't trademark infringement make.

Swift's response dismantled the claims the theme park made, but went on to note that the Evermore theme park had actually gone on social media and responded to messages about Swift's album, trying to associate the park with the album. In other words, the only potential for public confusion appears to have been generated by the theme park itself.

And now this is going to escalate further, as Swift's management company has countersued the park for the unauthorized use of Swift's music.
As Mike has explained, just about every provision of the social media moderation bill being proposed in the Utah legislature violates the First Amendment by conditioning platforms' editorial discretion over what appears on their services—discretion that the First Amendment protects—on meeting a bunch of extra requirements Utah has decided to impose. This post is about how everything Utah proposes is also barred by Section 230, and why it matters.

It may seem like a fool's errand to talk about how Section 230 prohibits state efforts to regulate Internet platforms while the statute currently finds itself on life support, with fading vital signs, as legislators on both sides of the aisle keep taking aim at it. After all, if it goes away, then it won't matter how it blocks this sort of state legislation. But the fact that it currently does preclude what we're seeing out of Utah is why it would be bad if Section 230 went away and we lost it as a defense against this sort of speech-chilling, Internet-killing regulatory nonsense from state governments. To see why, let's talk about how and why Section 230 currently forbids what Utah is trying to do.

We often point out in our advocacy that Congress wanted to accomplish two things with Section 230: encourage the most good content online, and the least bad.
We don't even need to speak to the law's authors to know that's what the law was intended to do; we can see what it was for in the preamble text in subsections (a) and (b), as well as the operative language of subsection (c), which provides platforms protection for the steps they take to vindicate these goals, making it safe for them to leave content up as well as safe for them to take content down.

It all boils down to Congress basically saying to platforms, "When it comes to moderation, go ahead and do what you need to do; we've got you covered, because giving you the statutory protection to make these Constitutionally protected choices is what will best lead to the Internet we want." The Utah bill, however, tries to directly mess with that arrangement. While Congress wanted to leave platforms free to do the best they could on the moderation front by making it legally possible, as a practical matter, for them to do it however they chose, Utah does not want platforms to have that freedom. It wants to force platforms to moderate the way Utah has decided they should moderate. None of what the Utah bill demands is incidental or benign; even the requirements for transparency and notice impinge on platforms' ability to exercise editorial and associative discretion over what user expression they facilitate, by imposing significant burdens on the exercise of that discretion. Doing so, however, runs headlong into the main substance of Section 230, which specifically sought to relieve platforms of burdens that would affect their ability to moderate content.

It also contravenes the part of the statute that expressly prevents states from interfering with what Congress was trying to accomplish with this law. The pre-emption provision can be found at subsection (e)(3): "No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section."
Even where Utah's law does not literally countermand Section 230's statutory language, what Utah proposes to do is nevertheless entirely inconsistent with it. While Congress essentially said with Section 230, "You are free to moderate however you see fit," Utah is trying to say, "No, you're not; you have to do it our way, and we'll punish you if you don't." Utah's demand is incompatible with Congress's policy and thus, per this pre-emption provision, not Constitutionally enforceable on this basis either.

And for good reason. As a practical matter, Congress and Utah can't both speak on this issue and have it yield coherent policy that doesn't subordinate Congress's mission to get the best online ecosystem possible by letting platforms feel safe to do what they can to maximize the good content and minimize the bad. Every new threat of liability is a new pressure diverting platforms' efforts away from being good partners in meeting Congress's goal and toward doing only what is needed to avoid the trouble these new forms of liability threaten. There is no way to satisfy both regulators; Congress's plan to regulate platform moderation via carrots rather than sticks is inherently undermined once sticks start to be introduced. Which is part of the reason why Congress wrote in the pre-emption provision: to make sure that states couldn't introduce any.

Section 230's drafters knew that if states could impose their own policy choices on Internet platforms, there would be no limit to the sort of obligations they might try to dream up. They also knew that if states could each try to regulate Internet platforms, it would lead to messy, if not completely irreconcilable, conflicts among states. That resulting confusion would smother the Internet Congress was trying to foster with Section 230 by making it impossible for Internet platforms to lawfully exist.
Because even if Utah were right, and its policy happened to be Constitutional and not a terrible idea, if any state were free to impose a good policy on content moderation, any other state would still be free to impose a bad one. Such a situation is untenable for a technology service that inherently crosses state boundaries, because it means that any service provider would somehow have to obey both the good state laws and the bad ones at the same time, even when they might be in opposition. Just think about the impossibility of trying to simultaneously satisfy, in today's political climate, what a Red State government might demand from an Internet platform and what a Blue State might. That readily foreseeable political catch-22 is exactly why Congress wrote Section 230 in such a way that no state government gets to demand appeasement when it comes to platform moderation practices.

The only solution to the regulatory paralysis Congress rightly feared is what it originally devised: writing pre-emption into Section 230 to get the states out of the platform regulation business and leave it all instead to Congress. Thanks to that provision, the Internet should be safe from Utah's attack on platform moderation and any other such state proposals. But only so long as Section 230 remains in effect as-is. What Utah is trying to do should therefore stand as a warning to Congress to think very carefully before doing anything to reverse course and alter Section 230 in any way that would invite the policy gridlock it had the foresight to foreclose twenty-five years ago with this prescient statute.
Pretextual stops are legal. The courts have said repeatedly that it's ok for cops to stop people for one thing to facilitate mini-investigations about other things. As long as the pretext holds up -- and reasonable suspicion about other things develops quickly enough -- cops can turn a failure to yield into a drug bust or a lucrative seizure.

This is only one form of lying blessed by the courts. Cops can also lie to people they're questioning to drag confessions out of them. That some of these confessions are false or completely tainted by the cops' lying doesn't seem to matter much. Overturned convictions and wrongful arrest lawsuits haven't changed the criminal "justice" matrix much over the years. Cops can lie and courts will say it's ok, apparently operating under the assumption that no innocent person would admit to a crime and that those with nothing to hide have nothing to fear.

But back to the pretext. Cops can initiate traffic stops to perform deeper investigations. But officers need to remember why they initiated the stop. And they need to provide the legal connective tissue between the initial stop and its eventual endpoint. In this case, the officers involved forgot what they were doing when they first started lying. (via FourthAmendment.com)

And that's what cost them their arrest. This opinion [PDF] by the Oregon Court of Appeals draws some (more) lines in the criminal justice sand. It may only apply in this state, but it's still significant. All sorts of detentions have been treated as consensual encounters by courts, even when it seems clear no regular citizen would feel free to walk away from cops. But this one is different. The seizure -- at least under the state's constitution -- begins when officers make it clear that any movement other than what was directed would be considered an attempt to flee and/or endanger officers.
You know what's always ripe for parody? Government agencies. You know who's often outlandishly upset about being parodied? Government officials.

Back in 2016, Parma, Ohio resident Anthony Novak created a fake Parma Police Department page on Facebook. It should have been clear to everyone the page was a parody. The fake Parma PD page posted announcements about a roving police van offering free abortions to teenagers, a plan to criminalize helping the homeless, and the PD "strongly discouraging minorities" from applying for positions with the agency.

Despite it being readily apparent this was not an official Parma PD page, Parma officers arrested Novak in March 2016. The page had only been live for 12 hours, but the PD claimed Novak's page "interrupted police operations." The Parma PD made the most of its apparently underutilized resources to stop this resident from making fun of it. To shut down a Facebook parody, the Parma PD deployed seven officers, three warrants, one subpoena, and hundreds of tax dollars to seize a bunch of electronic devices from Novak's house and throw him in jail. Novak spent four days in jail before being released and was ordered to report to a probation officer.

Novak was acquitted of the felony "disruption of service" charge. His ensuing lawsuit made its way to the Sixth Circuit Court of Appeals, which refused to grant qualified immunity to Parma PD officers. Unfortunately -- despite indicating it strongly felt the PD's actions violated Novak's First and Fourth Amendment rights -- it refused to make a call on either issue, sending it back to the district court for more fact-finding.

Unfortunately, the lower court doesn't appear to have understood the message the Sixth Circuit sent. The Sixth Circuit said this looked like a pretty clear case of First Amendment retaliation, aided in part by a state law that appears to criminalize protected speech:
In the Project Management Fundamentals Course, you will be introduced to the world of project management and the different processes and methodologies that guide the industry. You will understand the different roles and responsibilities of the team members that make up a project. With 104 lectures, it will cover the fundamentals of being a project manager, including the main approaches and principles. It will walk you through the proper stages of project management to deliver great results. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
I've written in the past, many times, about how so many people keep wanting to blame social media companies, or intermediary liability laws, for what are really just manifestations of larger societal issues. Social media is only serving to make evident what was previously hidden. A few weeks ago, we quoted UK lawyer Heather Burns noting that intermediary liability laws were being expected to pick up the slack for a wide variety of other failures regarding mental health care, social safety nets, criminal and civil justice issues, and more. Basically, a whole bunch of government failures were leading to problems in society, which were then being seen online. And rather than trying to fix the underlying causes, people were... blaming the internet. Burns later came on our podcast and we had a great detailed discussion about this issue.

A few days later, I heard a fascinating interview on NPR's Fresh Air. The interview was with Rosa Brooks, a law professor and human rights activist, who joined the Washington DC police force as a reserve officer for a period of four years (for most of the Trump administration). The interview is really quite fascinating on a variety of levels, but one thing stood out to me -- something that actually connects back to the point that Burns raised about how we're expecting the internet and intermediary liability laws to fill in for all the massive failures of society. To some extent, Brooks made the same point about the police: we've undermined so many other social safety nets that we now expect the police to fill in for just about everything else.

This isn't a new idea, of course. Tim Cushing has covered this point over and over again right here on Techdirt, including just recently, in writing about Denver's test of sending out mental health professionals rather than police on distress calls that did not appear to involve criminal behavior, and how it had been a huge success.
For many years, Tim has posted other similar stories, where it's just so dumb to send police to deal with a societal failing -- often in the mental health arena, but elsewhere as well.

In the Brooks interview, she notes how silly it is to have armed cops handling traffic stops. So many needless police shootings involve traffic stops where the cops overreact and shoot someone they stopped for some minor infraction. We could easily separate out the roles and handle traffic enforcement entirely differently, with traffic enforcers who are not police with guns, but who have a more administrative role.

And when you combine all of this, you realize that both of these threads really are about the same thing, from different angles. Society has failed to deal with mental health. It has failed to deal with extreme poverty. It has failed to deal with criminal justice and civil justice reform. And those failures are all creating messes. But rather than expect the government and public policy to actually clean up the messes, we're dumping them on social media companies... and the police. And both are leading to disastrous outcomes.
Telecom giants like Comcast and AT&T have spent the last three or four years pushing (quite successfully) for massive deregulation of their own monopolies, while pushing for significant new regulation of the Silicon Valley giants whose ad revenues they've coveted for decades. As such, it wasn't surprising to see AT&T come out with an incredibly dumb blog post last August throwing its full support behind Trump's legally dubious and hugely problematic executive order targeting social media giants and Section 230 of the Communications Decency Act, a law integral to protecting speech online.

In it, AT&T -- a company that just got done having a ten-year-long toddler moment about how the FCC's attempt to apply some fairly modest oversight of telecom giants was "government run amok" -- pivots to support having the FCC regulate social media, despite the agency having no authority to actually do so. Again, AT&T isn't operating in good faith here; the company is simply looking to make life more difficult for Silicon Valley competitors whose ad revenues the telecom giant has always had a weird obsession with. Mike did an excellent post breaking down the particulars of AT&T's inconsistent arguments.

Granted, AT&T's dodgy arguments have been (not at all coincidentally) perfectly mirrored by the painfully inconsistent arguments of FCC Commissioners like Brendan Carr, who spent the entirety of the Trump administration acting as an AT&T rubber stamp in human form. Not long ago, Carr tried to use AT&T's bad faith commentary to suggest there was some "growing consensus" that 230 needs to be "reformed" (read: dismantled for no coherent reason):
This week, both our winners on the insightful side come from our post about the latest story showing Facebook bent over backwards to have different rules for "conservative voices" on the platform, to avoid accusations of anti-conservative bias. The first place comment comes from bhull242, responding to the assertion that the terms "misinformation and hate" just mean something you disagree with:
So far, we've featured ~THE GREAT GATSBY~ and The Great Gatsby Tabletop Roleplaying Game in this series of posts about the winners of our public domain game jam, Gaming Like It's 1925. Today, we move on to the pair of games that tied as winners in the Best Remix category: Art Apart by Ryan Sullivan and There Are No Eyes Here by jukel.

Both games were obvious contenders for the category, and ultimately it proved too difficult to choose one over the other, because they are so intriguingly similar yet completely different. Both could be described as "art puzzles", and both remix multiple public domain works, but neither clearly rises above the other.

Art Apart is the more straightforward of the two: it's just a plain old jigsaw puzzle game using a series of paintings from 1925 and a fairly unpolished interface. But while this meant our judges didn't expect much from it at first glance, it proved to be a very pleasant surprise: carefully made, easy to use, employing a great selection of paintings complemented by public domain background music, all put together with an elegance that drew people in and had them solving entire puzzles when all they intended to do was poke around for a few minutes. In the process, they got to spend some time closely examining and appreciating five paintings that entered the public domain this year.

There Are No Eyes Here is the more abstract of the two games, which is fitting since it focuses on a single artist: Wassily Kandinsky, the pioneer of abstract art. Kandinsky's works made an appearance in one of last year's winners, which explored a series of paintings he created in 1924, and this game picks up the following year with five Kandinsky paintings from 1925.
While Art Apart is a traditional jigsaw puzzle, There Are No Eyes Here is about custom-made manipulations of its subject works: the player finds the elements of each painting that can be clicked to trigger animations in which Kandinsky's abstract shapes and forms begin shifting around, eventually unlocking the next painting in the series. Our judges noted that, to the artistically-inexperienced, the game was a perfect invitation to study this seminal artist's work with a level of attention to detail they might otherwise never have given it.

So there you have it: two games, both remixing multiple paintings and turning them into puzzles, both doing it in completely different ways. One more traditional, one more abstract, both successful at making the player take time to admire and enjoy some of the 1925 work that now belongs to us all — and both well deserving of the Best Remix award.

Play Art Apart and There Are No Eyes Here in your browser on Itch, and check out the other jam entries too. Congratulations to both designers for the win! We'll be back next week with another game jam winner spotlight.
I've been somewhat amazed at the response to Facebook's decision in Australia to first block news links, in response to a dangerous new law, and then to cave in and cut deals with news organizations to pay for links. Most amazing to me is that otherwise reasonable people in Australia got very angry at me, insisting that I was misrepresenting the tax. They keep insisting it's not a tax, and that it's a "competition" response to "unfair bargaining power." Except, as I've discussed previously, there's nothing to bargain over when you should never have to pay for links. The links are free. There's no bargaining imbalance, because there's nothing to bargain over. And it's clearly a tax if the only end result is that Google and Facebook have to fork over money because the government tells them to. That's... a tax.

Anyway, that's why I'm happy to see that The Juice Media -- an Australian outfit famous for making hilarious "Honest Government Ads," usually for the Australian government (but sometimes for elsewhere) -- has put out a new "ad" about the link tax, in which they explain how it was a fight to take money from one set of giant rich companies and give it to another set of giant rich companies, without doing anything useful in between:

It's worth watching. It also highlights some of the other awful aspects of the "code," which will give news organizations more access to data, as well as advance notice of algorithmic changes that no one else gets -- allowing them to better hijack attention away from anyone else. The whole deal is dangerous and corrupt, and no one should be supporting it.
Summary: With the beginning of the COVID-19 pandemic, most of the large social media companies very quickly put in place policies to try to handle the flood of disinformation about the disease, responses, and treatments. How successful those new policies have been is subject to debate, but in at least one case, the effort to fact check and moderate COVID information ran into a conflict with people reporting on violent protests (totally unrelated to COVID) in Nigeria.

In Nigeria, there’s a notorious police division called the Special Anti-Robbery Squad, known as SARS in the country. For years there have been widespread reports of corruption and violence in the unit, including stories of how it often robs people itself (despite its name). There have been reports about SARS activities for many years, but in the Fall of 2020 things came to a head when a video was released of SARS officers dragging two men out of a hotel in Lagos and shooting one of them in the street.

Protests erupted around Lagos in response to the video, and as the government and police sought to crack down on the protests, violence began, including reports of the police killing multiple protesters. The Nigerian government and military denied this, calling it “fake news.”

Around this time, users on both Instagram and Facebook found that some of their own posts detailing the violence brought by law enforcement on the protesters were being labeled as “False Information” by Facebook’s fact checking system.
In particular, an image of the Nigerian flag covered in the blood of shot protesters, which had become a symbolic representation of the violence at the protests, was flagged as “false information” multiple times.

Given the government’s own claims that reports of violence against protesters were “fake news,” many quickly assumed that the Nigerian government had convinced Facebook fact checkers that the reports of violence at the protests were, themselves, false information.

However, the actual story turned out to be that Facebook’s policies to combat COVID-19 misinformation were the real problem. At issue: the name of the police division, SARS, is the same as the more technical name of the virus behind COVID-19: SARS-CoV-2 (itself short for “severe acute respiratory syndrome coronavirus 2”). Many of the posts from protesters and their supporters in Lagos used the tag #EndSARS, referring to the police division, not the disease. And it appeared that the collision between those two meanings, combined with some automated flagging, resulted in the Nigerian protest posts being mislabeled by Facebook’s fact checking system.

Decisions to be made by Facebook:
As you'll recall, last summer there was a whole performative nonsense thing with then-President Trump declaring TikTok to be a national security threat (just shortly after some kids on TikTok made him look silly by reserving a million tickets to a Trump rally they never intended to attend). Trump and his cronies insisted that TikTok owner ByteDance had to sell the US operations of TikTok to an American firm. The whole rationale for this was the claim -- unsupported by any direct evidence -- that TikTok was a privacy risk, because it was owned by a firm based in Beijing, and that firm likely had connections to the Chinese government (as do basically all large Chinese firms). But how was that privacy risk any worse than with pretty much any other company? No one ever seemed to be able to say.

Eventually, after Trump blocked both Microsoft and Walmart from doing the deal, he "approved" a non-sale "hosting" deal with Oracle, whose founder/chairman, Larry Ellison, and CEO, Safra Catz, were both big Trump supporters. It quickly came out that TikTok's investors deliberately went hunting for a company that they knew Trump liked, and that's why they asked Oracle.

But part of the announcement of the "deal" was that Oracle would make sure that US TikTok users had their data protected, and that Oracle would keep that data out of the hands of the Chinese government. That seemed somewhat rich, considering that Oracle's initial rise to being a tech giant was built almost entirely on its close connections to the US government, and specifically the intelligence agencies. But it's become even more rich now that The Intercept reports that Oracle actually has a lucrative business helping repressive law enforcement in China do surveillance work. The long story is absolutely full of totally shocking -- but somehow not surprising -- details.
It starts off by noting that Oracle hosted a presentation on its own website, literally describing how it helped police in Liaoning province better sort through all of the surveillance data they collected:
New rules for social media companies and other hosts of third-party content have just gone into effect in India. The proposed changes to India's 2018 Intermediary Guidelines are now live, allowing the government to insert itself into content moderation efforts and make demands of tech companies that some simply won't be able to comply with.

Now, under the threat of fines and jail time, platforms like Twitter (itself recently in combat with the Indian government over its attempts to silence people protesting yet another bad law) can be held directly responsible for any "illegal" content they host, even as the government attempts to pay lip service to honoring long-standing intermediary protections that immunized them from the actions of their users.

Here's a really bland and misleading summary of the new requirements from the Economic Times, India's most popular business newspaper:
If ever there were a stupid, unconstitutional notion that appears to be evergreen, it must certainly be the attempts at outright banning games from the Grand Theft Auto series. While a certain segment of public officials have long sought to blame video games generally for all the world's ills, the GTA series has been something of a lightning rod for attempted censorship. Honestly, it's not totally impossible to understand why. The game is a violent, humorous parody of modern American life and pop culture, taken to such extremes as to artistically point out the flaws in our society.

You know... art.

Art which is protected by the First Amendment and thus protected from attempts at government censorship. Which doesn't keep public officials from trying to ban it anyway. The most recent example of this is one Illinois lawmaker suggesting the entire state ban sales of the game because of an uptick in carjackings in Chicago.
The Learn to Code 2021 Bundle has 13 courses to help you kickstart your coding career. Courses cover Ruby on Rails, C++, Python, C#, JavaScript, and more. You'll also learn about data science and machine learning. The bundle is on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
On the one hand, it's understandable that US phone companies don't want to maintain aging copper phone networks in the wake of sagging usage. On the other hand, traditional phone networks are very much still in use (especially among vulnerable elderly populations), many of these DSL lines remain the only option consumers can get thanks to spotty US broadband deployment, and much of the phone and DSL infrastructure was heavily subsidized by American taxpayers. Oh, and as Texas just realized, many of these older copper phone lines still work during disasters, when internet voice services don't.

As such, there are numerous regulations that prevent these companies from just severing these lines completely. But US telcos, tired of traditional phone and residential broadband service, want to shift focus. So instead of a responsible transition plan (one that might mandate even coverage of the wireless or fiber broadband upgrades they don't want to perform), many of these companies are simply letting the networks fall apart -- and refusing to repair the lines when they fail. In large part because they know US state and federal regulators will (usually) be too chickenshit to actually do anything about it.

In California, a report requested by the government found the same thing throughout the state. The April 2019 report, only just released after regional incumbents AT&T and Frontier tried to block it, found that as customer rates skyrocketed, both AT&T and Frontier increasingly cut back on infrastructure upgrades, repairs, and maintenance over the last decade. The report also found that AT&T has increasingly engaged in "redlining," or the act of failing to meaningfully upgrade lower-income and minority communities at the same rate as more affluent neighborhoods:
So, here's where we're at in the Fifth Circuit: cops can literally set a person on fire and walk away from it.

Judge Don Willett's incendiary comments opposing qualified immunity notwithstanding, civil rights litigation remains a sucker bet in the Fifth Circuit, where cops are granted judicial forgiveness more frequently than they are in any other judicial circuit.

Here's the latest depressing read from the Appeals Court, which can't talk itself into removing this shield from officers who tased a suicidal man after he had covered himself in gasoline, turning a potential suicide into an actual homicide.

Some cops seem to feel suicide threats should be converted into self-fulfilling prophecies. The cops involved here -- all Arlington, Texas police officers -- turned a distress call from a family member into the very thing the family members were hoping to prevent. From the opinion [PDF]:
For regular readers of Techdirt, Monster Energy is one of those companies that need only appear in the headline of a post before the reader knows that said post will be about some ridiculous trademark bullying Monster is doing. The company has a reputation for being about as belligerent on trademark matters as it could possibly be, lobbing lawsuits and trademark oppositions as though the company's lawyers had literally nothing else to do with their time. And while many, many, many of these bullying attempts fail when the merits are considered, the fact is that the bullying still often succeeds in its goal of using the massive Monster Energy coffers to push victims into either submission or corporate death.

The really frustrating part in all of this is how often Monster Energy attempts to trademark bully companies that aren't remotely competing in its market. One recent example of this is Monster going after MPT Autobody in South Carolina. For disclosure, one of the founders of MPT reached out to me personally to inform me of exactly what was going on. Based on our conversation and what I can see in public records, the order of events appears to go something like this:
Ben Smith has a fascinating piece in the New York Times about how independent investigative journalism is flourishing in Russia, despite an oppressive (and literally murderous) autocrat in power. There are a bunch of interesting points in the article about the various techniques they use -- some of which raise interesting ethical dilemmas -- but what caught my eye is just how vital it turns out the internet is to these organizations to be able to do what they do. Indeed, Smith points out that this is the flip side to the current moral panic in the US and elsewhere about "alternative media" and social media being the death of journalism:
Yet another report has shown that US consumers aren't getting the broadband speeds they're paying for.

Researchers from broadband deal portal AllConnect dug through FCC data on broadband speeds and found that about 45 million Americans aren't getting the speeds that broadband providers are advertising. Fiber and cable broadband providers appeared to have the toughest time providing the speeds they advertise, with those subscribers getting around 55% of the speeds they were promised. Satellite and DSL providers generally offer crappy speeds, but at least, the report found, those speeds were delivered more consistently.

The firm noted that consumers just aren't getting accurate data on what speed is available, or how much speed they'll get. Something that's kind of important during a pandemic in which broadband is key to education, employment, health care, human connection, and opportunity:
Having spent two and a half decades writing about innovation, one of the things that's most fascinating to me is how little most people can envision how innovation can have a positive effect on our lives. Perhaps it's a lack of imagination -- but, more likely, it's just human nature. Human psychology is wired for loss aversion, and it's much easier to understand all the ways in which technology and innovation can backfire and take away things we appreciate. History, however, tends to show that the positives of many innovations outweigh the negatives, but we're generally terrible at thinking through what those benefits might be.

Part of the reason is just that it's impossible to predict the future. There are too many variables, and too much randomness. But part of it might also be our general unwillingness to even try to imagine positive futures. Imagining positive futures is one tool for actually getting us to move in that direction. Even suggesting what interesting innovations and societal changes might happen can inspire individuals, organizations, institutions, and movements to try to turn what was first imagined into reality. And we sure could use a bit of positive thinking these days. This is the story of how we attempted to help create more positive visions of the future -- specifically around artificial intelligence.

As some of you may recall, a few years back we did a fun project called Working Futures, using a (more fun) type of scenario planning to explore possible futures for work -- and then turning those scenarios into entertaining science fiction. As many people know, there are all sorts of concerns about what the future of work might look like. We're living in disruptive times when it comes to innovation; the last few decades have seen a massive shift in the nature of employment, and there are many indications that this trend is accelerating.
Historically, similar shifts in work due to technology have also been disruptive and frightening for many -- but it all managed to work out in the end, despite fears of automation "destroying" jobs.

However, simply saying that "it will work itself out" is incredibly unsatisfying and, even worse, provides little to no guidance for a variety of different stakeholders -- from actual workers to policymakers trying to put in place reasonable policies for a changing world. The Working Futures project was an attempt to deal with that challenge. We created a special scenario planning deck of cards and ran a one-day session that helped us build a bunch of future scenarios. We gave those to science fiction writers, and eventually released an anthology of 14 speculative stories about the future of work (which is rated quite highly in Amazon reviews and on Goodreads as well).

Late last year, some people associated with the World Economic Forum and Berkeley's Center for Human-Compatible AI (CHAI) reached out to us to say that they had been engaged in a similar -- but slightly different -- endeavor, and wondered if we might be able to lead a similar scenario planning process. The two organizations had already been working on a series of events to try to imagine specifically what a "positive future" for AI might look like. We all know the doom and gloom and dystopian scenarios. So this project was focused on something different: explicitly positive futures. The end goal was to take some of these positive AI future scenarios and use them as part of a film competition from the X-Prize Foundation (not unlike our Working Futures project, but with films instead of written fiction).

They asked if we could take an approach similar to what we had done with Working Futures and run a workshop for around 90 attendees -- including some of the top economists, technologists, science fiction writers, and academics on this subject in the world -- and...
they said they'd already invited people for the event, just two weeks later.

That turned what would normally have been a quiet time in December into a frantic mad dash, as Randy Lubin (our partner in our various gaming endeavors) and I had to put together a virtual event. We've obviously done scenario planning events before -- including ones about the future of work. And even Working Futures was designed to be generally positive. But what WEF and CHAI were asking for was even more extreme, and required a real rethinking of how to put together a scenario planning program. Traditional scenario planning doesn't put any conditions on the potential scenario outputs -- so creating scenarios that are meant to be explicitly positive presented a few challenges.

Challenge 1: Directing scenario planning towards a desired style of outcome is a pretty big departure from how you normally do scenario planning (starting with driving forces, and following those wherever they may lead). There are risks in doing this kind of scenario planning, because you don't want to preset the end state, or you lose the value of the open brainstorming and surprise discoveries of scenario planning.

Challenge 2: Something we had discovered with Working Futures: explicitly "positive" futures sometimes feel... boring. They make for a tougher narrative, because good stories usually involve conflict, tension, and problems. That's much easier in a dystopian scenario than a utopian one. And if the end result of these scenarios is to drive useful storytelling, we had to consider how to create scenarios that were both interesting and "positive."

Challenge 3: With Working Futures, we did the scenario planning in a large room in San Francisco, with a custom card deck that we had made and printed, which everyone could use as part of the scenario planning process to experiment with a variety of different forces. In this case, we had to manage to do the workshop via Zoom.
This was a separate challenge in that, while we've all done Zoom meetings (so, so, so many of them...) throughout the pandemic, good scenario planning requires smaller groupings, and we hadn't had as much experience with Zoom's "breakout room" feature. This presented a double challenge in itself. We had to create a series of exercises that people could follow -- with enough scaffolding in the instructions that they could go off into groups and do the creative brainstorming -- without our being able to easily see how they were all doing. And we had to keep the whole thing interesting and exciting for a large group of very diverse people.

In the end -- somehow -- we succeeded in overcoming all three challenges, and created a really amazing workshop. The feedback we got was astounding. The key way we overcame the challenges was realizing that we'd start with a few "broad" ideas to get people thinking generally about these kinds of distant future worlds, and then with each exercise we'd focus more and more narrowly, building on the work of earlier exercises to help craft a variety of scenarios. The very first exercise was more of a warmup, but one that was still important to get the creative juices flowing: figuring out the new abundances and scarcities in such a world.

To me, this was a key idea. When we think about big, disruptive changes brought on by technology, they often involve new "abundances." Cars make the ability to travel long distances "abundant." Computers make doing complex calculations abundant. The internet makes information abundant. Yet the more interesting thing is how each new abundance... also creates new scarcities. For example, the abundance of information has created a scarcity of attention.
As you think through new abundances, you can start to recognize possible scarcities, and it's almost always in those new scarcities that you find interesting ideas about business models and jobs. So we had participants explore a few of those (here's one example that came out of the exercise):

In the second exercise, we asked the breakout groups to build a "qualitative dashboard" to guide humanity in this new, positive future. We assumed that there would be a focus on optimizing certain aspects of life, and we asked the teams to develop a "dashboard" of qualitative concepts that should be optimized, and, from there, what quantifiable measures might be used to see if society was reaching those milestones. Here's one example:

Of course, recognizing that whenever you try to "optimize" a particular value, it almost inevitably leads to unforeseen consequences (usually from focusing too narrowly on a small number of quantitative values, and missing the bigger picture), we then had the teams present their dashboards to a different team, and had those other teams provide an analysis of what might go wrong with such a dashboard. How might optimizing on one of these items go badly awry?

The third exercise was, in part, an attempt to deal with the problem of a utopian world being too boring. We had the teams focus on figuring out "the final hurdle" to reaching that "positive" future, and we used a tool that we've used a few times before: news headlines. We asked the breakout groups to effectively write a narrative in four headlines, starting with a negative headline demonstrating a major hurdle preventing society from reaching that positive future. Then the second headline would note some positive development that might, possibly, overcome the hurdle.
The third headline was a setback, in the form of some kind of resistance effort that might keep the hurdle from being cleared, followed finally by the last headline: a story showing evidence that the hurdle truly had been overcome.

From there, we started to really focus in. The headlines created a sense of the "world" each group was inhabiting, but we wanted to look more closely at what kind of world that was. The fourth exercise explored the new essential institutions, participatory organizations, and social movements in this new world. The idea here was to think about what life would actually look like in this world. Would there still be "jobs," or would your daily activity look radically different?

Again, mindful of both the potential "boringness" of utopia and the fact that perfection is impossible, in the middle of this exercise we introduced something of a "shock" to the worlds that were being built -- telling participants that a major earthquake had struck, with millions of people wounded, possibly dead, trapped, or missing -- and with major infrastructure disrupted. We asked the teams to go back and look at the institutions, organizations, and movements they'd just discussed to see how they reacted and how well they handled this shock (and whether new such groups formed instead):

The final exercise of the workshop was designed and run by WEF's Ruth Hickin, diving in even deeper, and asking participants to explore specific individuals within these scenarios.
Each person was assigned a future persona and had time to explore what that persona might think about this world. Then each participant took on that role and had an in-character discussion with the others in their group, trying to answer some difficult questions about their obligations to society, and whether or not they could find meaning in this future world.

While Randy and I planned the whole event in two frantic weeks, making it actually work required a bigger team of incredibly helpful people. Caroline Jeanmaire at CHAI and Conor Sanchez at WEF helped organize everything, and gave us great feedback and guidance throughout the project design. The two of them also helped keep the event running smoothly, following a very detailed run of show to make sure breakouts happened in a timely and clean manner, and that we could re-assemble everyone for the group discussions between breakouts. They also brought together a group of facilitators who helped guide each of the breakout discussions and keep everyone on track.

Another incredibly handy tool that made this all work was Google Slides. Between breakouts, we'd assemble everyone and discuss the next exercise, including an example slide. Then each breakout room had its own set of "template" slides with the instructions (in case they hadn't quite followed them when we explained them), an "example" slide for inspiration, and then the template slides for them to fill out. This turned out to have multiple useful features. Perhaps most importantly, it helped us record the brainstorming in a way that would live on and could be used later for the X-Prize competition. But it also allowed us to "peek in" on the various tables while the exercises were happening, without having to jump into their Zoom breakouts and disturb everyone.
As the breakouts were happening, I would flip between the different groups' slides to see if any of the groups appeared to be struggling or confused, and then we could send someone into that breakout to assist.

In the end, as noted, the event turned out to be a striking success. We've received great feedback on it, and are exploring possibly running it again (perhaps in a modified version) in the future. Randy also wrote up a post describing more of the nuts and bolts of the workshop, if you want details about how we pulled off the whole thing.

From my end, the biggest takeaway was that a well-crafted event could generate truly brilliant and inspiring ideas about what the future might look like, and it was somewhat humbling to see how our framing and scaffolding were embraced by such an eclectic and diverse group to generate such fascinating futuristic worlds and scenarios. Hopefully, some of these futures will inspire not just films or stories, but also people working to make something like those futures a reality.
Is it too late to force Tennessee to secede from the Union and become some sort of free-floating non-nation we can freely raid to shore up our non-wartime stockpiles of tobacco and country music?

To be fair, I'll list Tennessee's positives first. Within the last year, a court struck down a law that forbade the use of entertaining hyperbole by political candidates, and legislators finally passed an anti-SLAPP law with teeth -- the latter of which should head off bullshit like someone suing a reporter for things someone else said.

On the other hand, legislators continue to ignore the state's position as a backwater in terms of internet access. And legislators are still doing extremely stupid things, like asking federal legislators to bypass the First Amendment and Supreme Court precedent to jail people for burning the flag.

Here's the latest broadside against constitutional rights and common sense, via pretty much every member of Tennessee's Republican leadership. Let's go direct to the source of this hideousness, who provides the question this legislative bullshit begs:
The Complete Logo Design In Photoshop Course is an easy and fun, step-by-step, practical approach to Photoshop. You will learn how to create a professional-looking logo using text and images. You'll also learn how to organize your workspace layouts, adjust blending modes on layers, mask easily, align text, use the clone tool, and more. It's on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
I have a few different services that report to me if my email is found in data breaches, and recently I was notified that multiple email addresses of mine showed up in a leak from the service NetGalley. NetGalley, if you don't know, is a DRM service for books that is regularly used by authors and publishers to send out "advance reader" copies (known around the publishing industry as "galleys"). The service has always been ridiculously pointless and silly. It's a complete overreaction to the "risk" of digital copies of a book getting loose -- especially from the people who are being sent advance reader copies (generally journalists or industry professionals). I can't recall ever actually creating an account on the service (and can't find any emails indicating that I had -- but apparently I must have). However, in searching through old emails, I do see that various publishers would send me advance copies via NetGalley -- though I don't think I ever read any through the service. (The one time I can see that I wanted to read such a book, after getting sent a NetGalley link, I told the author that it was too much trouble, and they sent me a PDF instead, telling me not to tell the publisher who insisted on using NetGalley.)

It appears that NetGalley announced the data breach back in December, on Christmas Eve, meaning it's likely that lots of people missed it. Also, even though I'm told through this monitoring service that my email was included, NetGalley never notified me that my information was included in the breach. NetGalley did say that the breach included both login names and passwords -- suggesting that they didn't even know to hash their passwords, which is just extremely incompetent in this day and age.

So, from my side of things, this means the company put me and my information at risk for what benefit? To make my life as a potential reviewer of a book more difficult and annoying, and to limit my ability to easily read a book?
DRM benefits literally no one. And in this case, it has now created an even bigger mess by leaking my emails and whatever passwords I used for the service (thankfully, I don't reuse passwords, or it could have been an even bigger problem). For those who say that the DRM is still necessary to avoid piracy, that's ridiculous as well. If the book is going to get copied and leaked online, it's going to get copied and leaked online. And once one copy is out, all the DRM in the world is meaningless.

Rather than focusing so much on locking stuff up and making it impossible to read, while putting people's personal info at risk, just stop freaking out, recognize that most people are not out to get you by putting your stuff on file sharing sites, and focus on getting people to want to buy your books, rather than putting their data and privacy at risk.
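As an aside, the point about hashing is worth a brief illustration for anyone unfamiliar with it. A properly built site never stores passwords in recoverable form: it stores a random salt plus a slow, one-way digest, so a breach leaks only values that are expensive to reverse. Here's a minimal sketch using only Python's standard library (the function names and parameters are illustrative; this says nothing about how NetGalley's actual systems work):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to make brute-forcing leaked digests costly

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only this pair is stored, never the password."""
    salt = os.urandom(16)  # unique per user, so identical passwords hash differently
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from a login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

A service built this way simply has no plaintext passwords to leak, which is why "the breach included passwords" is such a damning statement.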
Last November, Comcast quietly announced that it would be expanding its bullshit broadband caps into the Northeast, one of the last Comcast territories where the restrictions hadn't been imposed. Of course, Comcast was utterly tone deaf to the fact that there was a historic health and economic crisis going on, and to how imposing unnecessary surcharges on consumers already struggling to make rent wasn't a great look. In some states, like Massachusetts, lawmakers stood up to the regional monopoly, going so far as to push a law that would have banned usage caps during the pandemic.

After gaining some bad press for the behavior, Comcast initially delayed the effort a few months, hoping that would appease folks. When it didn't, Comcast last week announced that it would be suspending the caps until 2022. This, according to Comcast, was to give consumers "more time to become familiar" with the restrictions:
The Supreme Court has done a lot over the years to shield law enforcement officers from accountability. It has redefined the contours of the qualified immunity defense to make it all but impossible for plaintiffs to succeed. Appeals courts have been hamstrung by Supreme Court precedent, forced to pretty much ignore the egregious rights violations in front of them in favor of dusting off old decisions to see if any officer violated someone's rights in exactly this way prior to this case.

Since law enforcement officers are apparently unable to exercise judgment on their own, the courts often grant forgiveness to these poor single-cell organisms who couldn't have possibly known that, say, locking a prisoner in a feces-covered cell for days violated the prisoner's rights. And that's the conclusion the Fifth Circuit Appeals Court reached in December 2019 in Taylor v. Riojas.

The Fifth Circuit is the worst circuit in which to bring a federal civil rights case. And it's still as awful as ever, even with Judge Don Willett -- who published a scathing dissent in another qualified immunity case -- sitting on the bench.

The only good news is that the Supreme Court may be slowly realizing its expansion of the qualified immunity defense is encouraging courts to give law enforcement officers a pass even when it's painfully clear rights have been violated. Almost a year after the Fifth Circuit ruled in favor of the prison guards, the Supreme Court reversed that decision. There may have been no case exactly on point, but for the Supreme Court that's not a necessity when there's a clear rights violation.
More bad news for Stadia. We were just discussing Google's decision to axe its own game development studios. In and of itself, a move to cut staff like this would be a worrying sign for the platform, especially given just how much interest in video games and game-streaming has grown during the COVID-19 pandemic. But when it's instead one more indication that Google isn't fully committed to its own platform, alongside the poor reception from the public and concerns about whether it can deliver the gaming experience it promised, these things tend to pile up on one another. I have attempted to drive home just how important developing trust with customers is for Stadia, given that those buying into the platform are gaming entirely at the pleasure of Google's desire to keep Stadia going.

And the hits to trust keep coming. In direct fallout from its decision to cut the development teams, Stadia customers are finding themselves unable to get support for Google's internally developed game.
Summary: Chatroulette rose to fame shortly after its creation in late 2009. The platform offered a new take on video chat, pairing users with other random users with each spin of the virtual wheel.

The novelty of the experience soon wore off when it became apparent Chatroulette was host to a large assortment of pranksters and exhibitionists. Users hoping to luck into some scintillating video chat were instead greeted with exposed penises and other body parts. (But mostly penises.)

This especially unsavory aspect of the service was assumed to be its legacy -- one that would see it consigned to the junkheap of failed social platforms. Chatroulette attempted to handle its content problem by giving users the power to flag other users, and deployed a rudimentary AI to block possibly-offensive users.

The site soldiered on, partially supported by a premium service that paired users with others in their area or who shared the same interests. Then something unexpected happened that drove a whole new set of users to Chatroulette: the COVID-19 pandemic. More people than ever were trapped at home and starved for human interaction. Very few of them were hoping to see an assortment of penises.

Faced with an influx of users and content to moderate, Chatroulette brought in AI moderation specialist Hive, the same company that currently moderates content on Reddit. With Chatroulette experiencing a resurgence, the company is hoping a system capable of processing millions of frames of chat video will keep its channels clear of unwanted content.

Decisions to be made by Chatroulette:
We recently announced the winners of our third annual public domain game jam, Gaming Like It's 1925. Now, just like last year, we're dedicating an episode of the podcast to looking at each of the winners a bit closer. Mike is joined by Randy Lubin (our partner in running the jams) and myself (with some unfortunate audio issues that I apologize for), to talk about all these great games that bring 1925 works into the present day.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
A bunch of New York City law enforcement unions have been suing to block the side effects of the repeal of 50-a, a law passed in 1976 that exempted police departments and other agencies (like fire departments) from disclosing information about misconduct to the public.

For more than 40 years, the bad law remained in place. It took nationwide anger over the killing of another black man by a white cop to get it taken off the books. In response, a bunch of unions presiding over New York City's police and fire departments lawyered up, hoping to continue withholding this information.

The legal battle has reached the Second Circuit Court of Appeals. And the Appeals Court doesn't find the plaintiffs' assertions about "irreparable harm" credible. The unions claim the repeal of 50-a (and the consequent release of disciplinary records) violates agreements they have with the city -- ones that say findings in favor of officers/employees will be removed from employees' disciplinary records.

The Appeals Court [PDF] points out that the unions can't just decide the public employees they represent don't have to follow the law.
You'll recall that after the Trump FCC effectively neutered itself at telecom lobbyists' behest in 2017, numerous states jumped in to fill the consumer protection void. Most notable among them was California, which in 2018 passed net neutrality rules that largely mirrored the FCC's discarded consumer protections. Laughing at the concept of states' rights, Bill Barr's DOJ immediately got to work protecting U.S. telecom monopolies and filed suit in a bid to vacate the rules, claiming they were "radical" and "illegal" (they were neither).

And while the broadband industry had a great run during the Trump era, nabbing billions in tax breaks and regulatory handouts, that era appears to be at an end. Earlier this month the Biden DOJ dropped its lawsuit against California, leaving the industry to stand alone. Now a judge has refused the broadband industry's request for an injunction, allowing California to finally enforce its shiny new law. Worse (for the broadband sector), Judge Mendez also made it very clear that, while the case isn't over yet, the broadband industry isn't likely to win. He was also less than impressed by the industry's claim that because it has tried to behave while awaiting a legal outcome, net neutrality rules somehow aren't necessary:
From mobile and video game building to monetization, master the best animation development practices across 146 hours of training on Unity and Blender. The 2021 Premium Unity Game Developer Bundle teaches you how to code by having you make your own versions of popular games. You'll also learn about cross-platform development, how to give your materials advanced effects, how to make a game that utilizes AI, and much more. It's on sale for $45.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
It has become an article of faith among some that the big social media sites engage in "anti-conservative bias" in their moderation practices. When we point out, over and over again, that there is no evidence to support these claims, our comments fill up with very, very angry people calling us "delusional" and saying things like "just look around!" But they never actually provide any evidence, because it doesn't seem to exist. Instead, what multiple looks at the issue have found is that moderation policies might ban racists, trolls, and bigots -- and unless your argument is that "conservatism" is the same thing as "racism, trolling, and bigotry," you don't have much of an argument. In fact, studies seem to show that Facebook, in particular, has bent over backwards to support conservative voices on the platform.

Last fall, a report came out noting that when an algorithmic change was proposed to downgrade news on Facebook overall, some extremist far-right sites were so popular on the platform that the company's leadership, including Mark Zuckerberg, feared Republicans would accuse them of "anti-conservative bias." So Zuckerberg stepped in to make sure the algorithm also downgraded some prominent "left-leaning" sites, even though the algorithm initially wasn't going to -- just so the company could claim that both sides of the traditional political spectrum were downgraded.

Over the weekend, a new report came out along similar lines, noting that Facebook's policy team spent a lot of time and effort putting in place a policy to deal with "misinformation and hate." Not surprisingly, this disproportionately impacted far-right extremists. While there certainly is misinformation across the political spectrum -- especially at the outer reaches of the traditional political compass -- it's only on the right that it has generally gone mainstream. And, again, the same political calculus appeared to come into play.
After the policy team worked out more neutral rules for dealing with misinformation and hate, Zuckerberg apparently stepped in to overrule the policy, and to make sure that wack job supporters of Alex Jones and similar conspiracy mongers were allowed to continue spewing misinformation:
Economists repeatedly warned that the biggest downside of the $26 billion Sprint T-Mobile merger was that the deal would dramatically reduce overall competition in the U.S. wireless space by eliminating Sprint. Data from around the globe clearly shows that the elimination of one of just four major competitors sooner or later results in layoffs and higher prices due to less competition. It's not debatable. Given that U.S. consumers already pay some of the highest prices for mobile data in the developed world, most objective experts recommended that the deal be blocked.

It wasn't. Instead, the Trump FCC rubber stamped the deal before even seeing impact studies. And the DOJ not only ignored the recommendations of its staff, but former Trump DOJ "antitrust" boss Makan Delrahim personally helped guide the deal's approval process via personal phone and email accounts. Both agencies, and a vocal chorus of telecom-linked industry allies, behaved as if all of this was perfectly legitimate and not grotesquely corrupt.

At the heart of the DOJ's approval was a flimsy proposal that involved giving Dish Network some T-Mobile spectrum in the hopes that, over seven years, it would be able to build out a replacement fourth carrier. As we noted at the time, there was very little chance this plan was ever going to work. And there have been several hints that we're already stumbling along this doomed trajectory.

One: Dish and its former CEO Charlie Ergen have a long history of empty promises in wireless. He'd been accused (including by T-Mobile previously) of simply hoarding valuable spectrum and stringing along feckless, captured regulators for years with an eye on cashing out once the spectrum's value had appreciated. Two: AT&T, Verizon, and T-Mobile are all heavily incentivized to make sure this proposal never gets off the ground.
Three: federal regulators are generally afraid to stand up to industry on any single issue of substance, and aren't likely to engage in the kind of hard-nosed nannying required to usher Dish's plan from pipe dream to major network.

Not too surprisingly, it doesn't sound like the relationship between Dish and T-Mobile is going particularly well, as Dish bleeds wireless subscribers (it lost 363,000 last quarter alone). A cornerstone of getting Dish up and running as a viable replacement fourth carrier involved the company leaning heavily on T-Mobile's existing 3G network. But that network is being shuttered as of the beginning of next year, Dish said in a filing:
Last October, Senators Ron Wyden and Elizabeth Warren asked the IRS's oversight body to take a look at the agency's use of third-party data brokers to obtain cell site location info harvested from phone apps. This new collection of location data appeared to bypass the Supreme Court's Carpenter decision, which said cell site location info was protected by the Fourth Amendment.

That means warrants are needed to obtain this information from cell service providers. Multiple government agencies -- including the CBP, DEA, and Defense Department -- appear to believe that purchases from data brokers a couple of steps removed from the location data collection process aren't affected by this warrant requirement. While both cell site location info from cell providers and bulk data from brokers can accomplish the same long-term tracking of individuals, the latter tends to be less detailed, since it sometimes requires apps to be in use to produce location data, rather than just connected to a cell tower.

The IRS may have believed no warrants were needed to buy bulk data from brokers, but its oversight body disagrees. The Treasury Department Inspector General says the 2018 Supreme Court ruling may cover this data as well.
It's no secret that in the year and a half since Google launched its video game streaming platform, Stadia, things haven't gone particularly well. Game developers were wary at the outset that Google, as it has with projects like this in the past, might simply one day shut the whole thing down if it decides the venture is a loser. The launch of Stadia itself was met with mostly meager interest, due to the scant games available on the platform. Even then, the rollout was a mix of chaos and glitches, critiques of its promise of true 4K game streaming, very low adoption rates, and some at the company appearing to want to go to war with game-streamers.

And now there are signals that the trouble is worsening. Google recently announced, completely without warning, that it was shuttering its in-house Stadia game development studio.
People have been very angry at me for pointing out that Facebook's decision to ban links to news down under actually made sense -- even though Facebook has now cut a deal to return the links. The move was in response to an incredibly poorly thought out law to force Facebook and Google to pay giant news organizations, just because those news organizations couldn't figure out how to innovate online. One key point: I said that even if Facebook is the worst representative of the "open web," this move was the right one for the open web, because the alternative is much worse. Since the Australian law would force Google and Facebook to pay for the "crime" of linking to news, it would set up the incredibly anti-open-web concept that you could be forced to pay to link.

Again, as we've already explained, this is idiotic. The links give websites free web traffic. Most news organizations, including those down in Australia, employ SEO and social media managers to try to get more links and more traffic from these websites, because the links themselves are valuable. And thus this entire bill is bizarre. It's saying: not only do you have to give us valuable traffic for free... you also have to pay us. I still can't think of any reasonable analog, the situation is so insane.

But -- some people argue back -- Facebook is no champion of the open web. Indeed. I've never argued otherwise. It's not. But this move was important to protect the open internet (and it's now disappointing that the company has caved). Of course, this move has also demonstrated why Facebook has, historically, been a danger to the open web as well. That's because when it blocked access to news links in Australia, it also did the same for many Pacific islands. And while we've mocked Australians who don't seem to realize they can just go to the websites of news organizations, for some of these Pacific islands, that's not actually the case.
Because of Facebook's other attacks on the open web.

For years, we've pointed out the evil that is Facebook's "Free Basics" program. This is a form of "zero rating," in which Facebook would subsidize (or even make free) access in remote parts of the world... but only to Facebook. Facebook, of course, framed this as a way of "connecting the poor" and helping to get affordable internet access to places that didn't have it. But that's not true. It only gave them access to Facebook. As many people have pointed out over the years, if Facebook really wanted to subsidize internet access in these parts of the world, it should have subsidized real access to the wider internet, not just Facebook.

So now these two things have collided in the South Pacific: Facebook's anti-open-internet zero rating policies, and Facebook's pro-open-internet decision not to link if it requires payment. And those who bought into the false prophet of Free Basics are now suffering: