President Trump openly admires authoritarians. He appears to believe he was elected dictator rather than president, and has been openly bitter about his perceived lack of power ever since. The world leaders he enjoys talking to most -- Vladimir Putin, Mohammed bin Salman, Recep Tayyip Erdogan -- are all notorious thugs who punish critics, dissidents, and anyone else who steps a little out of line.

Trump envies that power. He spends most of his phone time trying to impress a collection of international asshats. And he embarrasses himself (and us by proxy) when speaking about his favorite shitheels in public. Just recently, Trump spent part of his meeting with an American pastor recently freed from a Turkish prison praising the man who had put him there.
The NYPD barely bothers to punish officers who misbehave. This "misbehavior" often includes violations of rights and extrajudicial killings, but it appears the NYPD feels New York's "finest" should be above reproach. Consequently, NYPD internal investigations often conclude no officers should be reproached, allowing them to remain the "finest" even when they're really the worst.

A new wrinkle in the legal fabric might change that. After years of doing nothing (and after years of the NYPD never bothering to invoke the law), the state repealed "50-a," the statute that allowed the NYPD to withhold misconduct records from the public. For several years, the NYPD posted the outcomes of internal investigations. Then it decided it was no longer going to do that. First, it blamed the high cost of printer ink. Then it cited the law that allowed it to stop posting reports where the press could access them.

Lawsuits followed. And -- as is the case whenever law enforcement opacity is threatened -- the NYPD's unions have intervened. It was too little, too late. An injunction was sought and obtained, but ProPublica -- which wasn't a party to the lawsuit over 50-a records -- published what it had already received from the NYPD. But the battle continues because future opacity is at stake. Unfortunately, a federal court has decided opacity must win out for the moment.
In January of this year, we discussed how the Illinois Comptroller had decided to opt out of collecting red light camera fees from motorists ticketed by these automated revenue generators. Susana Mendoza said in a statement that while her office was taking this action due to the feds investigating the contractor for the cameras, a company called SafeSpeed, it was also her position that red light cameras were revenue generators that did little to actually improve public safety.

All very true... but about that federal investigation.
The Trump Administration's decision to send federal agents -- led by the DHS -- to Portland, Oregon to handle civil unrest (prompted by yet another killing of an unarmed Black man by a white police officer) continues to generate litigation.

Supposedly sent to protect federal buildings targeted by Portland protesters, the DHS task force -- composed of CBP, ICE, and FPS officers -- rolled into Portland Gestapo-style, sending out unidentified officers to toss people into unmarked vehicles, spiriting them away to undisclosed locations to be subjected to detainments and interrogations that were never documented.

The DHS task force redefined riot police to include rioting federal police. Officers attacked press and legal observers with the same enthusiasm they showed protesters. Local journalists sued, obtaining a restraining order against federal agents… one the federal agents immediately violated.

Another lawsuit has been filed, this one accusing the DHS task force of violating the rights of protesters. The ACLU -- along with a number of other plaintiffs, including the "Black Millennial Movement" -- claims federal officers are deploying excessive force and engaging in unlawful detainments of participants in the ongoing Portland protests.

The complaint [PDF] opens up with a nice little dig at the Administration's unwillingness to properly staff its departments, reminding the court (and readers) that the DHS still doesn't have a legally appointed director.
A version of this post appeared on Project Disco: What the Bostock Decision Teaches About Section 230.

Earlier this summer, in Bostock v. Clayton County, Ga., the Supreme Court ruled 6-3 in favor of an interpretation of Title VII of the Civil Rights Act that bars discrimination against LGBT people. The result is significant, but what is also significant -- and relevant to the discussion here -- is the analysis the court used to get there.

What six justices ultimately signed onto was a decision that made clear that when a statute is interpreted, that interpretation needs to be predicated on what the statutory language actually says, not what courts might think it should say.
On Tuesday morning a story began making the rounds indicating that Russian hackers had somehow managed to hack into Michigan's election systems, gaining access to a treasure trove of voter data. Russian newspaper Kommersant was quick to proclaim that nearly every voter in Michigan -- and a number of voters in additional states -- had had their personal information compromised. The report was quickly parroted by other outlets, including the Riga-based online newspaper Meduza, which insisted that the breach was simply massive:
The Prestige Adobe Suite UI/UX Bundle will help you expand your design skills with over 100 hours of content on essential Adobe Suite programs. Courses cover Adobe XD, Photoshop, After Effects, Premiere Pro, HTML5, and CSS3. You'll learn how to build professional responsive websites, how to edit videos, how to animate your UI design, and much more. The bundle is on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Earlier this year, regulators in Australia announced plans to tax Google and Facebook for sending traffic to news organizations, and then pay those news organizations. The draft law literally names Google and Facebook and says that it only impacts those two companies. The whole thing is bizarre. There are no pretenses here. It's just that old line media companies (many owned by Rupert Murdoch) are jealous of the success of Google and Facebook online, and seem to think they're magically owed money. And that's what the tax would do. It would force Google and Facebook to pay money for the awful crime of sending traffic to news sites without paying them.

Never mind that if they didn't want this traffic they could use robots.txt to block it. Never mind that companies (including many of these media companies) hire SEO and social media experts to try to get more traffic. These companies feel so entitled to money that they believe Facebook and Google need to pay them for sending traffic, just because. And Australian regulators seem to think this is a grand idea.

A few weeks back Google posted an open letter to Australians noting that this would do a lot more harm than good, and that other parts of the draft law would damage the quality of Google's search results (among other things, the law wouldn't let Google make changes to its search algorithms without giving media companies four weeks' notice, which is insane, given that Google tweaks its algorithm multiple times a day).

Now Facebook has gone even further, and outright said that if this becomes law, it will no longer allow publishers to share news on its platform in Australia. This is the nuclear option -- similar to what Google did in Spain six years ago when Spain passed a similar law. In that case, Google waited until after the law was passed to make the announcement and pull the plug.

In this case, Facebook is firing a warning shot by saying that's exactly what it will do if this draft bill becomes law:
It wasn't supposed to go this way.

AT&T purchased DirecTV in 2015 for $67.1 billion (including debt). The company then gobbled up Time Warner in 2018 for a cool $86 billion. Together, these deals were supposed to cement AT&T as a dominant player in the video advertising wars to come. Instead, they created a convoluted mess that resulted in a mass exodus of pay TV subscribers. In fact, a combination of bungled integration, massive debt, price hikes, and confusing branding has resulted in AT&T losing 7 million subscribers since 2018. That's obviously not the kind of M&A-fueled sector domination AT&T executives like Randall Stephenson (since "retired") envisioned.

Now AT&T is reportedly trying to offload DirecTV entirely:
One of the more frustrating aspects of the ongoing COVID-19 pandemic has been the frankly haphazard manner in which too many folks are tossing around ideas for bringing it all under control without fully thinking things through. I'm as guilty of this as anyone, desperate as I am for life to return to normal. "Give me the option to get a vaccine candidate even though it's in phase 3 trials," I have found myself saying more than once, each time immediately realizing how stupid and selfish it would be to not let the scientific community do its work and do it right. Challenge trials, some people say, should be considered. There's a reason we don't do that, actually.

And contact tracing. While contact tracing can be a key part of containing the spread of a virus as infectious as COVID-19, how we contact trace is immensely important. Like many problems we encounter these days, there is this sense that we should just throw technology at the problem. We can contact trace through our connected phones, after all. Except there are privacy concerns. We can use dedicated apps on our phones for this as well, except this is all happening so fast that it's a damn-near certainty that there are going to be mistakes made in those apps.

This is what Albion College in Michigan found out recently. Albion told students two weeks prior to on-campus classes resuming that they would be required to use Aura, a contact tracing app. The app collects a ton of real-time and personal data on students in order to pull off the tracing.
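That heavy data collection isn't inherent to contact tracing. For contrast, decentralized exposure-notification designs -- like the Apple/Google framework -- never ship identity or location to a server: phones broadcast short-lived random identifiers over Bluetooth and keep their contact logs on the device. Here's a minimal sketch of that rotating-identifier idea (illustrative only; this is not Aura's design, nor the actual Apple/Google protocol):

```python
import os
import time

ROTATION_INTERVAL = 15 * 60  # rotate the broadcast ID every 15 minutes


class ProximityLogger:
    """Sketch of decentralized contact tracing: broadcast short-lived
    random IDs over Bluetooth, store observed IDs locally."""

    def __init__(self):
        self.current_id = os.urandom(16)
        self.last_rotation = time.time()
        self.seen = []  # (timestamp, observed_id) pairs, kept on-device only

    def broadcast_id(self) -> bytes:
        # Rotate the identifier so observers can't track one device over time.
        if time.time() - self.last_rotation > ROTATION_INTERVAL:
            self.current_id = os.urandom(16)
            self.last_rotation = time.time()
        return self.current_id

    def record_sighting(self, observed_id: bytes):
        # Log nearby devices locally; nothing leaves the phone unless the
        # user later tests positive and explicitly consents to an upload.
        self.seen.append((time.time(), observed_id))
```

The privacy difference is architectural: nothing in that local log identifies a person or a place unless the user later chooses to share it, which is exactly the kind of design question that gets skipped when apps are rushed out.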
The Ninth Circuit Appeals Court has just stripped away the protections granted to journalists and legal observers covering ongoing protests in Portland, Oregon. After journalists secured an agreement from local police to stop assaulting journalists and make them exempt from dispersal orders, the DHS's ad hoc riot control force (composed of CBP, ICE, and Federal Protective Services) showed up and started tossing people into unmarked vans and assaulting pretty much everyone, no matter what credentials they displayed. Shortly after that, a federal court in Oregon granted a restraining order forbidding federal agents from attacking journalists and observers.

Not that granting the restraining order did much to prevent federal officers from beating journalists with batons, spraying them with pepper spray, or making sure they weren't left out of any tear gassings. The plaintiffs were soon back in court seeking sanctions against federal violators of the order. The DHS said it couldn't identify any of the officers and stated it had punished no one for violating the order. This prompted the judge to add more stipulations to the order, including the wearing of identification numbers by officers engaging in riot control.

Unfortunately for journalists and legal observers, the restraining order is no longer in place. It was rolled back by the Appeals Court in a very short order [PDF] with the court finding that a blanket order protecting journalists and observers from being assaulted makes things too tough for federal cops. (via Courthouse News)
One of the ideas that comes up a lot in proposals to change Section 230 is that Internet platforms should be required to produce transparency reports. The PACT Act, for instance, includes the requirement that they "[implement] a quarterly reporting requirement for online platforms that includes disaggregated statistics on content that has been removed, demonetized, or deprioritized." And the execrable NTIA FCC petition includes the demand that the FCC "[m]andate disclosure for internet transparency similar to that required of other internet companies, such as broadband service providers."
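To make "disaggregated statistics" concrete: the PACT Act doesn't prescribe a format, but the idea is counts broken out by action taken and by category of content, rather than a single aggregate number. A hypothetical sketch of what generating such a report might look like (all names and categories here are invented for illustration):

```python
from collections import Counter

# Hypothetical moderation log: (action, content_category) pairs.
# A real platform would pull these from its enforcement systems.
moderation_log = [
    ("removed", "spam"),
    ("removed", "harassment"),
    ("demonetized", "misinformation"),
    ("deprioritized", "borderline content"),
    ("removed", "spam"),
]


def quarterly_transparency_report(log):
    """Disaggregate moderation actions by action type and content category."""
    counts = Counter(log)
    for (action, category), count in sorted(counts.items()):
        print(f"{action:>13} | {category:<20} | {count}")


quarterly_transparency_report(moderation_log)
```

Much of the policy fight is over how fine-grained those buckets must be, and whether quarterly reporting at this granularity is feasible for anyone smaller than the giants.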
At Free Press, we work in coalition and on campaigns to reduce the proliferation of hate speech, harassment, and disinformation on the internet. It’s certainly not an easy or uncomplicated job. Yet this work is vital if we’re going to protect the democracy we have and also make it real for everyone — remedying the inequity and exclusion caused by systemic racism and other centuries-old harms seamlessly transplanted online today.

Politicians across the political spectrum desperate to “do something” about the unchecked political and economic power of online platforms like Google and Facebook have taken aim at Section 230, passed in 1996 as part of the Communications Decency Act. Changing or even eliminating this landmark provision appeals to many Republicans and Democrats in DC right now, even if they hope for diametrically opposed outcomes.

People on the left typically want internet platforms to bear more responsibility for dangerous third-party content and to take down more of it, while people on the right typically want platforms to take down less. Or at least less of what’s sometimes described as “conservative” viewpoints, which too often in the Trump era has been unvarnished white supremacy and unhinged conspiracy theories.

Free Press certainly aligns with those who demand that platforms do more to combat hate and disinformation. Yet we know that keeping Section 230, rather than radically altering it, is the way to encourage that. That may sound counter-intuitive, but only because of the confused conversation about this law in recent years.

Preserving Section 230 is key to preserving free expression on the internet, and to making it free for all, not just for the privileged. Section 230 lowers barriers for people to post their ideas online, but it also lowers barriers to the content moderation choices that platforms have the right to make.

Changes to Section 230, if any, have to retain this balance and preserve the principle that interactive computer services are legally liable for their own bad acts but not for everything their users do in real time and at scale.

Powerful Platforms Are Still Powering Hate, and Only Slowly Changing Their Ways

Online content platforms like Facebook, Twitter and YouTube are omnipresent. Their global power has resulted in privacy violations, facilitated civil rights abuses, provided white supremacists and other violent groups a place to organize, and enabled foreign-election interference and the viral spread of disinformation, hate and harassment.

In the last few months some of these platforms have begun to address their role in the proliferation and amplification of racism and bigotry. Twitter recently updated its policies by banning links on Twitter to hateful content that resides offsite. That resulted in the de-platforming of David Duke, who had systematically skirted Twitter’s rules by linking to hateful content across the internet while following some limits for what he said on Twitter itself.

Reddit also updated its policies on hate and removed several subreddits. Facebook restricted “boogaloo” and QAnon groups. YouTube banned several white supremacist accounts. Yet despite these changes and our years of campaigning for these kinds of shifts, hate still thrives on these platforms and others.

Some in Congress and on the campaign trail have proposed legislation to rein in these companies by changing Section 230, which shields platforms and other websites from legal liability for the material their users post online.
That’s coming from those who want to see powerful social networks held more accountable for third-party content on their services, but also from those who want social networks to moderate less and be more “neutral.”

Taking away Section 230 protections would alter the business models of not just big platforms but every site with user-generated material. And modifying or even getting rid of these protections would not solve the problems often cited by members of Congress who are rightly focused on racial justice and human rights. In fact, improper changes to the law would make these problems worse.

That doesn’t make Section 230 sacrosanct, but the dance between the First Amendment, a platform’s typical immunity for publishing third-party speech, and that same platform’s full responsibility for its own actions is a complex one. Any changes proposed to Section 230 should be made deliberately and delicately, recognizing that amendments can have consequences not only unintended by their proponents but harmful to their cause.

Revisionist History on Section 230 Can’t Change the Law’s Origins or Its Vitality

To follow this dance it’s important to know exactly what Section 230 is and what it does.

Written in the early web era in 1996, the first operative provision in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

When a book or a newspaper goes to print, its publisher is legally responsible for all the words printed. If those words are plagiarized, libelous, or unlawful then that publisher may face legal repercussions. In the terms of Section 230, they are the law’s “information content provider[s].”

Wiping away Section 230 could revert the legal landscape to the pre-1996 status quo. That’s not a good thing. At the time, a pair of legal decisions had put into a bind any “interactive computer service” that merely hosts or transmits content for others. One case held that a web platform that did moderate content could be sued for libel (just as the original speaker or poster could be) if that alleged libel slipped by the platform’s moderators. The other case held that sites that did not moderate were not exposed to such liability.

Before Section 230 became law, this pair of decisions meant websites were incentivized to go in one of two directions: either don’t moderate at all, tolerating not just off-topic comments but all kinds of hate speech, defamation, and harassment on their sites; or vet every single post, leading inexorably to massive takedowns and removal of anything that might plausibly subject them to liability for statements made by their users.

The authors of Section 230 wanted to encourage the owners of websites and other interactive computer services to curate content on their websites as these sites themselves saw fit. But back then that meant those websites could be just as responsible as newspapers for anything anyone said on their platforms if they moderated at all.

In that state of affairs, someone like Mark Zuckerberg or Jack Dorsey would have the legal responsibility to approve every single post made on their services. Alternatively, they would have needed to take a completely hands-off approach.
The overwhelming likelihood is that under a publisher-liability standard those sites would not exist at all, at least not in anything like their present form.

There’s an awful lot we’re throwing out with the bathwater if we attack not just the abuses of ad-supported and privacy-invasive social-media giants but all sites that allow users to share content on platforms they don’t own. Smaller sites likely couldn’t make a go of it at all, even if a behemoth like Facebook or YouTube could attempt the monumental task of bracing for potential lawsuits over the thousands of posts made every second of the day by their billions of users. Only the most vetted, sanitized, and anodyne discussions could take place in whatever became of social media. Or, at the other extreme, social media would descend into an unfiltered and toxic cesspool of spam, fraudulent solicitations, porn, and hate.

Section 230’s authors struck a balance for interactive computer services that carry other people’s speech: platforms should have very little liability for third-party content, except when it violates federal criminal law or intellectual property law.

As a result, websites of all sizes exist across the internet. A truly countless number of these — like Techdirt itself — have comments or content created by someone other than the owner of the website. The law preserved the ability of those websites, regardless of their size, to tend to their own gardens and set standards for the kinds of discourse they allow on their property without having to vet and vouch for every single comment.

That was the promise of Section 230, and it’s one worth keeping today: an online environment where different platforms would try to attract different audiences with varying content moderation schemes that favored different kinds of discussions.

But we must acknowledge where the bargain has failed too. Section 230 is necessary but not sufficient to make competing sites and viewpoints viable online. We also need open internet protections, privacy laws, antitrust enforcement, new models for funding quality journalism in the online ecosystem, and lots more.

Taking Section 230 off the books isn’t a panacea or a pathway to all of those laudable ends. Just the opposite, in fact.

We Can’t Use Torts or Criminal Law to Curb Conduct That Isn’t Tortious or Criminal

Hate and unlawful activity still flourish online. A platform like Facebook hasn’t done enough yet, in response to activist pressure or advertiser boycotts, to further modify its policies or consistently enforce existing terms of service that ban such hateful content.

There are real harms that lawmakers and advocates see when it comes to these issues. It’s not just an academic question around liability for carrying third-party content. It’s a life and death issue when the information in question incites violence, facilitates oppression, excludes people from opportunities, threatens the integrity of our democracy and elections, or threatens our health in a country dealing so poorly with a pandemic.

Should online platforms be able to plead Section 230 if they host fraudulent advertising or revenge porn? Should they avoid responsibility for facilitating either online or real-world harassment campaigns? Or use 230 to shield themselves from responsibility for their own conduct, products, or speech?

Those are all fair questions, and at Free Press we’re listening to thoughtful proposed remedies.
For instance, Professor Spencer Overton has argued forcefully that Section 230 does not exempt social-media platforms from civil rights laws for targeted ads that violate voting rights and perpetuate discrimination.

Sens. John Thune and Brian Schatz have steered away from a takedown regime like the automated one that applies to copyright disputes online, and towards a more deliberative process that could make platforms remove content once they get a court order directing them to do so. This would make platforms more like distributors than publishers, like a bookstore that’s not liable for what it sells until it gets formal notice to remove offending content.

However, not all amendments proposed or passed in recent times have been so thoughtful, in our view. Changes to 230 must take the possibility of unintended consequences and overreach into account, no matter how surgical proponents of the change may think an amendment would be. Recent legislation shows the need for clearly articulated guardrails. In an understandable attempt to cut down on sex trafficking, a law commonly known as FOSTA (the “Fight Online Sex Trafficking Act”) changed Section 230 to make websites liable under state criminal law for the knowing “promotion or facilitation of prostitution.”

FOSTA and the state laws it ties into did not precisely define what those terms meant, nor set the level of culpability for sites that unknowingly or negligently host such content. As a result, sites used by sex workers to share information about clients, or even used for discussions about LGBTQIA+ topics having nothing to do with solicitation, were shuttered.

So FOSTA chilled lawful speech, but it also made sex workers less safe and the industry less accountable, harming some of the people the law’s authors fervently hoped to protect. This was the judgment of advocacy groups like the ACLU that opposed FOSTA all along, but also of academics who support changes to Section 230 yet concluded FOSTA’s final product was “confusing” and not “executed artfully.”

That kind of confusion and poor execution is possible even when some of the targeted conduct and content is clearly unlawful. But rewriting Section 230 to facilitate the takedown of hate speech that is not currently unlawful would be even trickier and fundamentally incoherent. Saying platforms ought to be liable for speech and conduct that would not expose the original speaker to liability would have a chilling impact, and likely still wouldn’t lead to sites making consistent choices about what to take down.

The Section 230 debate ought to be about when it’s appropriate or beneficial to impose legal liability on parties hosting the speech of others. Perhaps the larger debate on the legal limits of speech should be broader than that. But it has to happen honestly and on its own terms, not get shoehorned into the 230 debate.

Section 230 Lets Platforms Choose To Take Down Hate

Platforms still aren’t doing enough to stop hate, but what they are doing is in large part thanks to having 230 in place.

The second operative provision in the statute is what Donald Trump, several Republicans in Congress, and at least one Republican FCC commissioner are targeting right now. It says “interactive computer services” can “in good faith” take down content not only if it is harassing, obscene or violent, but even if it is “otherwise objectionable” and “constitutionally protected.”

That’s what much hate speech is, at least under current law.
And platforms can take it down thanks not only to the platforms’ own constitutionally protected rights to curate, but because Section 230 lets them moderate without exposing themselves to publisher liability as the pre-1996 cases suggested.

That gives platforms a freer hand to moderate their services. It lets Free Press and its partners demand that platforms enforce their own rules against the dissemination of hateful or otherwise objectionable content that isn’t unlawful, but without tempting platforms to block a broader swath of political speech and dissent up front.

Tackling the spread of online hate will require a more flexible multi-pronged approach that includes the policies recommended by Change the Terms, campaigns like Stop Hate for Profit, and other initiatives. Platforms implementing clearer policies, enforcing them equitably, enhancing transparency, and regularly auditing recommendation algorithms are among these much-needed changes.

But changing Section 230 alone won’t answer every question about hate speech, let alone about online business models that suck up personal information to feed algorithms, ads, and attention. We need to change those through privacy legislation. We need to fund new business models too, and we need to facilitate competition between platforms on open broadband networks.

We need to make huge corporations more accountable by limiting their acquisition of new firms, changing stock voting rules so people like Mark Zuckerberg aren't the sole emperors over these vastly powerful companies, and giving shareholders and workers more rights to ensure that companies are operated not just to maximize revenue but in socially responsible ways as well.

Preserving not just the spirit but the basic structure of Section 230 isn’t an impediment to that effort; it’s a key part of it.

Gaurav Laroia and Carmen Scurato are both Senior Policy Counsel at Free Press.
I regret to inform you that AT&T is at it again. For over a decade now, the company has had a weird infatuation with Google. It seems to truly hate Google and has long decided that anything bad for Google must be good for AT&T. Because Google was an early supporter of net neutrality -- a concept that AT&T (stupidly and incorrectly) seems to think is an existential threat to its own business plans of coming up with sneaky ways to spy on you and charge you more -- over a decade ago, AT&T started floating the lame idea that if it's to be held to "net neutrality," Google ought to be held to "search neutrality." Of course, there's a problem with that: there's no such thing as "search neutrality," because the whole point of search is to rank results for you. A "neutral" search would be a useless search that ranks nothing.

However, now that the FCC (which knows better) caved in to the bumptious Trump demands to reinterpret Section 230 of the Communications Decency Act, AT&T stupidly (and self-destructively) has decided that it's going to comment against Section 230. AT&T's top lobbyist Joan Marsh put up a truly spectacularly dumb blog post about how this is "the neutrality debate we need to have" (i.e., about Google and Facebook's treatment of content, rather than AT&T's treatment of network connections):
The Ultimate PMP, Six Sigma, and Minitab Bundle will help you hone your managerial and data analysis skills, which are vital to effective project delivery. The courses cover Six Sigma white, yellow, green, and black belts, as well as graphical tools, control charts, and hypothesis testing in Minitab. There are also three courses on Lean project management. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
There has been plenty of talk about how technology has impacted how we live during the pandemic, but it's interesting to see how that's impacting things beyond the most obvious -- including some notable cultural changes. Over in the NY Times, reporter Taylor Lorenz, who always has her finger on the pulse of the most interesting cultural changes driven by technology, has an article about college collaboration houses. That is, because so many colleges are remaining in distance learning to start the school year, thanks to the ongoing pandemic, students are recognizing that just because they don't need to be on campus doesn't mean they need to stay at home either:
Last May, a largely overlooked report by OpenSignal detailed how, despite endless hype, U.S. 5G is notably slower than 5G in most other developed countries. Because U.S. regulators failed to make mid-band spectrum (which offers faster speeds at greater range) widely available, many U.S. wireless carriers like Verizon embraced higher millimeter wave spectrum (which has trouble with range and building wall penetration) or low-band spectrum (which offers greater range but at notably reduced speeds). The result of the study was fairly obvious.

A new updated report by OpenSignal didn't have any better news. According to the wireless network analysis firm, average 5G download speeds in the US are somewhere around 50 Mbps. And while that's certainly nothing to sneeze at, it's a far cry from carrier hype proclaiming 5G is somehow utterly revolutionary, and it's far from the 200-400 Mbps speeds being seen in many other countries:
This week, both our winners on the insightful side are anonymous commenters on our post about dismantling the police. In first place, it's some thoughts on where to start:
Get your First & Fourth Emojiment gear in the Techdirt store on Threadless »

Earlier this week, we added two of our popular old designs to our line of face masks in the Techdirt store on Threadless: the First and Fourth Amendments, translated into the language of emojis. Both are available as standard and premium masks and in youth sizes, plus all kinds of other gear: t-shirts, hoodies, phone cases, notebooks, buttons, and much more.

And if you haven't in a while, check out the Techdirt store on Threadless to see the other designs we have available, including classic Techdirt logo gear and our most popular design, Nerd Harder. The profits from all our gear help us continue our reporting, and your support is greatly appreciated!
Mere days ago, we were talking about Activision's decision to delete and replace the trailer for the latest Call of Duty game worldwide due to pressure from the Chinese government. That pressure came about over one second's worth of footage in the trailer that showed an image from pro-democracy protests in 1989. While it was only a trailer for an unreleased game, the point I attempted to make is that this was a terrible precedent to set. It's one thing to sanitize games, a form of art, for distribution within China. We could spend hours arguing over just how willing companies should be to bow to the thin skin of the Chinese government when it comes to art in favor of making huge sums of money, but that's at least understandable. It makes far less sense to apply those changes to the larger world, where China's pearl-clutching sensibilities aren't a thing.

And now we're seeing this continue to occur. Kotaku has a quick write-up of several changes made to a handful of re-released retro games, and this appears to be more of the same. We'll start with the re-release of Baseball Stars 2, a Neo Geo classic.
Summary:

On March 15, 2019, the unimaginable happened. A Facebook user -- utilizing the platform's live-streaming option -- filmed himself shooting mosque attendees in Christchurch, New Zealand.

By the end of the attack, the shooter had killed 51 people and injured 49. Only the first shooting was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more.

The stream was removed by Facebook almost an hour after it appeared, thanks to user reports. The moderation team began working immediately to find and delete re-uploads by other users. Violent content is generally a clear violation of Facebook's terms of service, but context does matter. Not every video containing violence merits removal, but Facebook felt this one did.

The delay in response was partly due to limitations in Facebook's automated moderation efforts. As Facebook admitted roughly a month after the shooting, the shooter's use of a head-mounted camera made it much more difficult for its AI to make a judgment call on the content of the footage.

Facebook's efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites -- an inevitability in the digital age. Even with moderators knowing exactly what they're looking for, platform users are still finding ways to post the shooter's video to Facebook. Some of this is due to the sheer number of uploads moderators are dealing with. The Verge reported the video was re-uploaded 1.5 million times in the 48 hours following the shooting, with 1.2 million of those automatically blocked by moderation AI.

Decisions to be made by Facebook:
If you live in a rural area, or have driven across the country anytime in the last five years, you probably already know the telecom industry's wireless coverage maps are misleading -- at best. In turn, the data carriers deliver to the FCC is also highly suspect. Regardless, this is the data being used when we shape policy and determine which areas get broadband subsidies, and, despite some notable progress in improving this data in recent years, it's still a major problem. Last year, for example, the Trump FCC quietly buried a report showing how major wireless carriers routinely overstate wireless voice and data availability.

Facing massive political pressure from pissed off (and bipartisan) state lawmakers eager for a bigger slice of federal subsidies, the FCC has started taking the basic steps necessary to improve things. One of those improvements is a recent proposal (pdf) that would require carriers to actually drive around testing their network performance so they can provide more accurate, real-world data. This isn't a huge ask. But T-Mobile and AT&T are fighting back against the proposal, claiming it's "too expensive":
For decades, trust and safety professionals — in content moderation, fraud and risk, and safety — have faced enormous challenges, often under intense scrutiny. In recent years, it’s become even more clear that the role of trust and safety professionals is both critically important and difficult. In 2020 alone, we’ve seen an increasing need for this growing class of professionals to combat a myriad of online abuses related to systemic racism, police violence, and COVID-19 — such as hate speech, misinformation, price gouging, and phishing — while keeping a safe space for connecting people with vital, authoritative information, and with each other.

Despite the enormous impact trust and safety individuals have on the online and offline safety of people, the professional community has historically been dispersed, siloed, and informally organized. To date — unlike, say, in privacy — no organization has focused on the needs of trust and safety professionals in a way that builds a shared community of practice.

This is why we founded the Trust & Safety Professional Association (TSPA) and the Trust & Safety Foundation Project (TSF) — something we think is long overdue. TSPA is a new, nonprofit, membership-based organization that will support the global community of professionals who develop and enforce principles and policies that define acceptable behavior online. TSF will focus on improving society’s understanding of trust and safety, including the operational practices used in content moderation, through educational programs and multidisciplinary research.

Since we launched in June, we’ve gotten a number of questions about what TSPA and TSF will (and won’t) do. So we thought we’d tackle them right here, and share more with you about who’s included, why we launched now, and what our vision is for the future. You can also hear us talk more about both organizations on episode 247 of the Techdirt podcast. And if you want to know even more, we’re all ears!

Q&A

Q. How do you define trust and safety? Don’t you mean content moderation?

We define trust and safety professionals as the global community of people who develop and enforce policies that define acceptable behavior online.

Content moderation is a big part of trust and safety, and the area that gets the most public attention these days. But trust and safety also includes the people who tackle financial risk and fraud, those who process law enforcement requests, engineers who work on automating these policies, and more. TSPA is for the professionals who work in all of those areas.

Q. What’s the difference between TSPA and TSF?

TSPA is a 501(c)(6) membership-based organization for professionals who develop and enforce principles and policies that define acceptable behavior and content online. Think ABA for lawyers, or IAPP for privacy people, but for those working in trust and safety, who can use TSPA to connect with a network of peers, find resources for career development, and exchange best practices.

TSF is a fiscally sponsored project of the Internet Education Foundation and focuses on research.

The two organizations are complementary, but have distinct missions and serve different communities. TSPA is a membership organization, while TSF has a charitable purpose.

Q. Why are you doing this now?

We first started discussing the need for something like this more than two years ago, in the wake of the first Content Moderation at Scale (COMO) conference in Santa Clara.
The conference was convened by one of TSPA’s founders and board members, Santa Clara University law professor Eric Goldman, which you can read about right here. After the first COMO get-together, it was clear that there was a need for more community amongst people who do trust and safety work.

Q. Are you taking positions on policy issues or lobbying?

Nope. We’re not advocating for public policy positions on behalf of corporate supporters or anyone else. We do want to help people better understand trust and safety as a field, as well as shed light on the challenges that trust and safety professionals face.

Q. Ok, so you launched. Now what?

For TSPA, we’re in the process of planning some virtual panel discussions that will happen before the end of the year on various topics related to trust and safety. Topics will range from developing wellness and resilience best practices, to operational challenges in the face of current events like the US presidential election and COVID-19. Longer term, we’re working on professional development offerings, like career advancement bootcamps and a job board.

Over at TSF, we partnered with the folks right here at Techdirt to launch with a series of case studies from the Copia Institute that illustrate challenging choices that trust and safety professionals face. We are also hosting an ongoing podcast series called Flagged for Review, with interviews with people who have expertise in trust and safety.

We’re also looking for a founding Executive Director, who can get TSPA and TSF off the ground. Send good candidates our way.

Q. Sounds pretty good. How do I get involved?

Sign up here so we can share more with you about TSPA and TSF in the coming months as we open our membership and develop our offerings. Follow us on Twitter, too. If you work for one of our corporate supporters, you can reach out to your trust and safety leadership as well to find out more. We’d also love to hear from organizations and people who want to help out, or whose work is complementary to our own. We’re excited to further develop and support the community of online trust and safety professionals.
There was much nonsense spewed at this week's Republican National Convention, and as was to be expected given the nonsense narrative about "anti-conservative bias" in big tech, there were plenty of people using the podium to whine about how the big internet companies are working against them. Thanks to the folks at Reason for pointing out how utterly stupid and counterfactual this actually is. Indeed, if you wanted to watch the RNC speeches (and I'm not sure why you would), the only place to actually watch them uninterrupted was... on those internet platforms that the speakers swore were trying to silence them.
Learn a new hobby with the Green Thumb Gardening Bundle. You'll learn the basics of caring for houseplants, succulents, grass, herbs, and more. Courses also cover garden design, plant propagation, pruning, and building your own planters. The bundle is on sale for $20.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Here's quite an example of the Streisand Effect. Buzzfeed investigative reporters have an incredible new series of stories about the massive new prison/concentration camps built in China to house the various Muslim minority populations it has been imprisoning (Uighurs, Kazakhs and others). But what's relevant from our standpoint here at Techdirt is just how they were able to track this information down. As revealed in a separate article, Buzzfeed's reporters effectively used the Streisand Effect. They looked at the maps provided online by the Chinese internet giant Baidu and spotted a bunch of areas that were "blanked out." The reporters noticed that this graying out was deliberate and different from the standard "general reference tiles" Baidu would show when it didn't have sufficiently high-resolution imagery.

Once they realized that something must be going on in those spots, they found many more examples that matched in places where the reported complexes were:
The U.S. telecom industry's monopolization problem shows no sign of slowing down.

According to the latest data from Leichtman Research, the cable industry is nearing a 70% market share over fixed line broadband. That's thanks to many factors, not least of which is that most U.S. phone companies have effectively given up on seriously upgrading their aging DSL lines, driving a greater portion of Americans to the only companies actually offering modern broadband speeds: Charter (Spectrum) and Comcast. Phone companies collectively lost another 150,000 subscribers last quarter, while cable providers added about 1,400,000 users in just three months.

For the cable industry, this is all a wonderful thing. Less competition from phone companies, combined with a Trump FCC that couldn't care less about the sector's competition problems, means they can get away with charging higher rates than ever for a service that comes (not coincidentally) with some of the worst customer service ratings of any industry in America (seriously, stop and think about that for a moment).

With COVID-19 making it clear that broadband is an essential utility, users are flocking to cable connections if they want to remain tethered to their jobs, education, and friends. Charter (Spectrum), as a result, saw 850,000 new customers in one quarter alone, a quarterly record for any broadband provider, at any point in U.S. history:
The facts of this case are pretty ugly, so let's just dive right into them. As Lenore Skenazy reported for Reason last year, two government employees decided a single incident of a mother leaving her kids in the car was all the reason they needed to swing by the house and strip-search every one of her six children. The oldest was five years old. The youngest were a pair of 10-month-old twins.

Holly Curry stopped at a shop to get some muffins and left her six children in the car while she ran in to get them. She was gone for less than 10 minutes. It was only 67 degrees outside. When she came back to her car, two police officers told her she shouldn't leave her kids in the car and wrote up a "JC3 form" -- a hotline-type alert that would be forwarded to Kentucky's Child Protective Services.

The next day a CPS investigator showed up. So did a sheriff's deputy. Here's what happened next:
The world may well feel like a terrible place to you right now. A pandemic is sweeping much of the world, with leaders from many countries playing the ostrich, or else treating the victims as though they were mere idiots. Racial tensions and brutal police practices are on full display, with the most surprising aspect being that they continue even as the world is shining a spotlight on the offenders. World leadership appears to be in full retreat, leaving space for truly nefarious actors to shoulder their way into ever more troubling activities.

Just last week, the White Sox beat the Cubs in two out of three. These are dark, dark times indeed.

But, hark, all ye who may despair, for I bring good tidings. Mere days ago, we talked about a brand war that appeared to be brewing (heh) between grocer Aldi and Brewdog, a self-styled "punk brewery." It started when Brewdog released a "Punk IPA," fully in line with its branding motif. Aldi then released a beer called "Anti-Establishment IPA" in a similar looking blue can. This led to Brewdog suggesting on Twitter that maybe it should release a "Yaldi" beer. Aldi said "ALD IPA" would be a better name... and Brewdog agreed, rebranding the beer under that name.

Notably absent from the whole episode were cease and desist notices from either side, lawyers filing trademark lawsuits, or any legal machinations of any kind. Instead, there was much good-natured ribbing and a fair amount of congenial creativity at play. In the end, Aldi's social media accounts had a laugh at Brewdog taking its suggestion, and even mentioned it might have to save some aisle space for the newly branded beer.

Which, in conclusion, appears to be happening.
For as long as cops have been poorly behaved, people have talked about defunding the police. This talk has gotten louder in recent years and almost deafening in recent weeks, as protests over police brutality erupted around the nation in the wake of the George Floyd killing.

But what does it mean to defund the police? In most cases, it doesn't mean getting rid of police departments. It means taking some of the millions spent on providing subpar law enforcement and spreading that money around to social services and healthcare professionals -- steering people trained to react with violence away from those who would be better served by social service safety nets or by interventions from people trained to handle mental health crises.

Those opposed to defunding police departments (that's most police officials and officers) say it can't be done without ushering in a criminal apocalypse. Police departments demand an inordinate share of most cities' budgets, but law enforcement officials refuse to agree money should be steered away from them even as cities prepare to redirect some calls cops normally handle to other city services.

Cops believe they're the "thin blue line" between order and chaos. They believe they're the only thing standing between good people and criminals. But that's just something they say to make themselves feel better about the babysitting and clerical work that consumes most of their working hours. Josie Duffy Rice's excellent article about the long racist history of American law enforcement brings the receipts. What's standing between us and supposed chaos is barely anything at all.
COVID-19 has disrupted almost everything. Most schools in the United States wrapped up the 2019-2020 school year with zero students in their buildings, hoping to slow the spread of the virus. Distance learning is the new normal -- something deployed quickly with little testing, opening up students to a host of new problems, technical glitches, and in-home surveillance.

Zoom replaced classrooms, online products replaced teachers, and everything became a bit more dystopian, adding to the cloud of uncertainty ushered in by the worldwide spread of a novel virus with no proven cure, treatment, or vaccine.

Schools soon discovered Zoom and other attendance-ensuring options might be a mistake as miscreants invaded virtual classrooms, distributing sexual and racist content to unsuspecting students and teachers. These issues have yet to be solved as schools ease back into Distance Learning 2.0.

Then there's the problem with tests. Teachers and administrators have battled cheating students as long as testing has existed. Now that tests are being taken outside of heavily controlled classrooms, software is stepping in to do the monitoring. That's a problem. It's pretty difficult to invade someone's privacy in a public school, where students give up a certain amount of their rights to engage in group learning.

Now that learning is taking place in students' homes, schools and their software providers seem to feel this same relinquishment of privacy should still be expected, even though the areas they're now encroaching on have historically been considered private places. As the EFF reports, testing is now being overseen by Professor Big Brother and his many, many eyes. All of this is in place just to keep students from cheating on tests:
As you may have heard, last week Robert F. Kennedy Jr. and his anti-vax organization "Children's Health Defense" filed a supremely stupid lawsuit against Facebook, Mark Zuckerberg, and fact checking organizations Poynter and Politifact, among others. It was filed early last week and I've wanted to write it up since someone sent it to me a few hours after it was filed, but, honestly, this lawsuit is so incredibly stupid that every time I tried to read through it or write about it, my brain just shut down. I've been incredibly unproductive the last week almost entirely because of this silly, silly lawsuit and my brain's unwillingness to believe that a lawsuit this stupid has been filed. And, as regular readers know, I write about a lot of stupid lawsuits. But this one is special.

The basis (if you can call it that) for this lawsuit is that Kennedy is mad that Facebook is blocking the medical disinformation he and his organization publish. Because it's wrong. And dangerous. And stupid. Facebook has every right to do this, of course, so the lawsuit has to come up with the dumbest possible reason to argue as a basis for a lawsuit. We've covered lots of other bad lawsuits about content moderation, but the knots Kennedy and his team tie themselves in to make this argument are truly special (and I don't mean that in a positive way):
As various Republicans in Congress have tried to tap dance around the fact that they're the political party of the batshit crazy QAnon conspiracy theory cult, it's actually nice to see Senator Lindsey Graham -- who has become a consistent Trump kissass over the past few years -- speak up in a Vanity Fair interview and call out QAnon for being "batshit crazy." He didn't tiptoe around it like some others:
Learn screenwriting the fast, easy, and simple way with the Screenwriting Made Easy 2020 Beginner Course. With 38 lectures, it will go over all the basics you need for planning your movie script, including the idea, structure and characters, scriptwriting, screenplay format, and what to do after writing the first draft. It's on sale for $29.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
If you are interested in having us run Threatcast 2020, or in commissioning some other "serious" games for your organization or as a group event, please contact us.

Back at the end of January, you may recall that we wrote about Threatcast 2020, an in-person election disinformation brainstorming simulation we created last year in partnership between our think tank organization, the Copia Institute, and Randy Lubin of Leveraged Play. The game was developed as an in-person brainstorming exercise to look at various strategies that might be used to engage in (and counter) disinformation and misinformation strategies around the 2020 election. We had hoped to run the event throughout this year.

Of course, soon after we announced it, the pandemic hit the US pretty hard, and the idea of running in-person events disappeared. The game had a variety of specific elements to it, and replacing it via Zoom just wouldn't be the same. After it became clear that the pandemic situation would almost certainly rule out all in-person events this year, we set to making an online version of the game, which we completed a few weeks back. We've now run the event a few times, some for private groups, and one "showcase" event we put on just last week. The event itself was run under Chatham House rules, so we will not identify who attended or what individuals said, but I can talk a bit about what happened at the event. And, just for clarification, we had a wide range of participants -- from companies, non-profits, foundations, academia, and government.

One participant who did agree to be named was famed investor Esther Dyson, who told me of the event that "It was fun and funny, but it had enough truth in it to be an amazing and eye-opening experience. This kind of simulation is exactly the preparation people need for the real world, whatever world they operate in." She also noted her key takeaway from the event: "The most compelling message is that the chaos hackers were almost redundant in the ugly world that the two warring parties -- or four warring factions -- were creating for themselves and all around them. Our wish, in playing as the chaos team, was for a contested election, not a specific winner. And a final key message: it will be important to see who can bring us together -- especially AFTER the election."

The game itself involves players working in teams as various political factions -- representing a broad coalition of political operatives (not as specific candidates or campaigns) -- and responding to certain situational prompts (and actions by other teams) as they navigate from now through the election (and beyond). Not all of the factions are interested in supporting a happy democratic election. In the event we ran last week, there were four rounds covering the run-up to the election and the immediate aftermath of the election.

The players brought a vast array of manipulation and deception to the campaigns and created an atmosphere of paranoia, anger, and confusion. Over the course of the election, the center-right Republicans turned their focus to down-ballot races, enabling the GOP to keep the Senate and retake the House of Representatives even as the Democrats won the presidency. However, Trump refused to concede defeat and the game ended with a standoff at the White House.
I should note that while there is, within the game, some election modeling to see how these strategies impact the actual election, the game is not designed to simulate (and certainly not to predict) the outcome of the election, but rather to simulate what kinds of disinformation we'll see (across the board). Along those lines, I'll note that the results of this simulation turned out quite different than those of the other Threatcasts we have run.

Of particular interest in last week's simulation: the amount of chaos. If 2020 has taught us anything, it's that nothing seems off the table, and no idea is too crazy. That played out within our game as well (though at least one of our judges noted that even some of the more "extreme" ideas presented were ones already playing out in real life). Another element, as Esther Dyson noted above, was just how much chaos there is overall -- such that some of the players (who were in the role of chaos agents, trying to create more chaos) found that the other factions were more or less doing their job for them, making it easier to just amplify the crazy concepts others were coming up with. Again, that feels somewhat true to life.

I was at least somewhat surprised at the role TikTok played in the various campaigns. Nearly all of the factions at one point or another came up with a TikTok strategy -- perhaps foreshadowing where the technological battleground will be this year. Not surprisingly, much of the strategy of those supporting the Democrats focused first on influencing what few swing voters remain, then pivoted heavily toward getting out the vote and increasing voter participation. On the Republican side, there was a split, as noted above: more traditional Republicans mostly ignored the presidential campaign and focused on down-ballot Congressional races, while the Trump campaign focused heavily on spreading fear, uncertainty, and doubt about... well... everything.

Running Threatcast has been quite eye-opening in highlighting the many different ways in which disinformation and misinformation are likely to show up in the next few months. If you're interested in having us run Threatcast 2020 for your organization or group (it's way, way, way better than a Zoom happy hour), please contact us.
Over the last year Bridgefy, a messaging app backed by Twitter cofounder Biz Stone, has been heavily promoted as just perfect for those trying to stand up to oppressive, authoritarian governments. The reason: the app uses both Bluetooth and mesh network routing to let users within a couple hundred meters of one another send group and individual messages -- without their packets ever touching the internet. Originally promoted as more of a solution for those out of reach of traditional wireless, the company has more recently been playing up its product's use by protesters in Belarus, India, the U.S., Zimbabwe, and Hong Kong.

The problem: the app is a security and privacy mess, and the company has known since April -- yet it's still marketing the app as great for protesters. A new research study, first spotted by Ars Technica, found that the app suffers from numerous vulnerabilities that could actually put protesters at risk:
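To make the mesh-routing idea concrete, here's a minimal, purely illustrative sketch -- my own simplification, not Bridgefy's actual protocol or code -- of flood-based relaying, where each phone rebroadcasts a message to peers in radio range until a hop limit runs out. The comments flag the core risk the researchers' findings point at: every relaying device handles the payload, so weak or missing encryption exposes messages far beyond their intended recipients.

```python
# Illustrative sketch only -- not Bridgefy's actual code or protocol.
# Simulates flood-based mesh relaying: each node rebroadcasts messages
# to peers in radio range until a hop limit (TTL) runs out.
import math
from dataclasses import dataclass, field

RANGE_METERS = 100  # assumed Bluetooth-class radio range

@dataclass
class Node:
    name: str
    x: float
    y: float
    seen: set = field(default_factory=set)
    inbox: list = field(default_factory=list)

def in_range(a, b):
    return math.hypot(a.x - b.x, a.y - b.y) <= RANGE_METERS

def flood(nodes, sender, msg_id, payload, ttl=5):
    """Deliver payload from sender to every reachable node, hop by hop."""
    frontier = [sender]
    sender.seen.add(msg_id)
    while frontier and ttl > 0:
        nxt = []
        for node in frontier:
            for peer in nodes:
                if peer is not node and in_range(node, peer) and msg_id not in peer.seen:
                    peer.seen.add(msg_id)
                    # Every relay handles the payload; without end-to-end
                    # encryption, each hop can read (or log) the plaintext.
                    peer.inbox.append(payload)
                    nxt.append(peer)
        frontier = nxt
        ttl -= 1

# Alice can't reach Bob directly (160m apart), but the relay bridges them.
nodes = [Node("alice", 0, 0), Node("relay", 80, 0), Node("bob", 160, 0)]
flood(nodes, nodes[0], msg_id=1, payload="meet at the square at 6")
print([(n.name, n.inbox) for n in nodes])
```

The convenience and the danger are the same property: the intermediate "relay" phone, owned by a stranger, receives the message too.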
As one of the most beloved science fiction series in history, Star Trek has unsurprisingly seen its share of intellectual property flare-ups. With Viacom manning the IP enforcement guns, it only makes sense that the series has been the subject of the company's failed attempt to pretend Fair Use doesn't exist, its failed copyright enforcement that took down an authorized Star Trek panel, and its failed attempt to actually be good humans to the series' adoring fans.

But this is not a story of Viacom failing at yet another thing. Instead, Viacom/CBS, along with Netflix, won in court, defeating an appeal by a video game maker that claimed an episode of Star Trek: Discovery infringed on the copyrights for a video game.
Summary: Content moderation questions are not just about the rules that internet platforms create and enforce for themselves: they sometimes involve users enforcing some form of the site's rules, or their own rules, within spaces created on those platforms. One interesting case study involves the US Army's esports team and how it has dealt with hecklers.

The US Army has a variety of different channels for marketing itself to potential recruits, and lately it's been using its own "professional esports team" as something of a recruiting tool. Like many esports teams, the US Army team set up a Discord server. After some people felt that the Army was trying to be too "cute" on Twitter -- by tweeting the internet slang "UwU" -- a bunch of users set out to see how quickly they could be banned from the Army's Discord server. In fact, many users started bragging about how quickly they were banned -- often after posting links or asking questions related to war crimes, including accusations of the US Army's involvement in certain war crimes.

This carried over to the US Army's esports streaming channel on Twitch, where it appears the Army set up certain banned words and phrases, including "war crimes." That led at least one user -- esports personality Rod "Slasher" Breslau -- to try to get around the filter by typing "w4r cr1me" instead. The message made it through, and a few seconds later Breslau was banned from the chat by the Army's esports player, Green Beret Joshua "Strotnium" David, who said out loud during the stream, "have a nice time getting banned, my dude." Right before saying this, David was mocking "internet keyboard monsters" for this kind of activity.

When asked about this, the Army told Vice News that it considered the questions to be a form of harassment, and in violation of Twitch's stated rules, even though it was the Army that was able to set the specific moderation rules on the account and choose whom to ban:
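For a sense of why "w4r cr1me" initially sailed through, here's a small illustrative sketch -- hypothetical, not Twitch's actual moderation code -- of a naive banned-phrase filter and the kind of leetspeak normalization that would have caught the respelling.

```python
# Illustrative sketch, not Twitch's actual filtering code: why a naive
# banned-phrase filter misses "w4r cr1me", and how normalizing common
# leetspeak substitutions catches it.
BANNED = {"war crimes", "war crime"}

# Common digit/symbol stand-ins for letters.
LEET = str.maketrans({"4": "a", "1": "i", "3": "e", "0": "o", "5": "s", "@": "a", "$": "s"})

def naive_filter(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in BANNED)

def normalized_filter(message: str) -> bool:
    text = message.lower().translate(LEET)
    return any(phrase in text for phrase in BANNED)

print(naive_filter("w4r cr1me"))       # False -- sails past the filter
print(normalized_filter("w4r cr1me"))  # True  -- caught after normalization
```

As the incident shows, the cat-and-mouse game never really ends: filters get smarter, and so do the respellings -- which is why the ban here came from a human moderator, not the word list.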
Your rights are more protected in some areas of the country than in others. That's the conclusion reached by Reuters in its examination of qualified immunity cases across the country.

Reuters' first report on qualified immunity showed we have the Supreme Court to blame for the high bar plaintiffs must clear to hold police officers accountable for rights violations. The doctrine was created by the court back in 1967. Subsequent decisions have made it easier for cops to escape judgment by limiting the lower courts' ability to hand down precedent on rights violations. Fewer precedential decisions mean fewer cops "know" their violation of citizens' rights was wrong, leading to more dismissals at summary judgment, where all an officer has to do is raise the qualified immunity defense. If no case is on point, the cop wins and the victim loses.

But courts can interpret Supreme Court precedent differently, leading to some very noticeable variations in qualified immunity cases. This report shows the worst place to sue a police officer is the Fifth Circuit, which covers Texas, Louisiana, and Mississippi. If you're a terrible cop, the best place to work is Texas, where the Appeals Court will side with you more often than in any other state.
With less than eighty days until Election Day and a pandemic surging across the country, disinformation continues to spread across social media platforms, posing dangers to public health, voting rights, and our democracy. Time is short, and social media platforms need to ramp up their efforts to combat election disinformation and online voter suppression -- just as they have with COVID-19 disinformation.

Social media platforms have content moderation policies in place to counter both COVID-19 disinformation and election disinformation. However, platforms seem to be taking a more proactive approach to combating COVID-19 disinformation by building tools, spending significant resources, and, most importantly, changing their content moderation policies to reflect the evolving nature of inaccurate information about the virus.

To be clear, COVID-19 disinformation is still rapidly spreading online. However, the platforms' actions on the pandemic demonstrate they can develop specific policies to address and remove this harmful content. Platforms' efforts to mitigate election disinformation, on the other hand, are falling short, due to the significant gaps that remain in their content moderation policies. Platforms should seriously examine how their COVID-19 disinformation policies can apply to reducing the spread of election disinformation and online voter suppression.

Disinformation on social media can spread in a variety of ways, including through (1) the failure to prioritize authoritative sources of information and third-party fact-checking; (2) algorithmic amplification and targeting; and (3) platform self-monetization. Social media platforms have revised their content moderation policies on COVID-19 to address many of the ways disinformation can spread about the pandemic.

For example, Facebook, Twitter, and YouTube all direct their users to authoritative sources of COVID-19 information. In addition, Facebook works with fact-checking organizations to review and rate pandemic-related content; YouTube utilizes fact-checking information panels; and Twitter is beginning to add fact-checked warning labels. Twitter has also taken the further step of expanding its definition of what it considers harmful content in order to capture and remove more inaccurate content related to the pandemic. To reduce the harms of algorithmic amplification, Facebook uses automated tools to downrank COVID-19 disinformation. And to stop the monetization of pandemic-related disinformation, Facebook restricts its advertising policies to prevent the sale of fraudulent medical equipment and prohibits ads that use exploitative tactics to create panic over the pandemic.

These content moderation policies have resulted in social media platforms taking down significant amounts of COVID-19 disinformation, including recent posts from President Trump. Again, disinformation about the pandemic persists on social media. But these actions show the willingness of platforms to take action and reduce the spread of this content.

In comparison, social media platforms have not been as proactive in enforcing or developing new policies to respond to the spread of election disinformation. Platforms' civic integrity policies are primarily limited to prohibiting inaccurate information about the processes of voting (e.g., misrepresentations about the dates and times people can vote).
But even these limited policies are not being consistently enforced. For example, Twitter placed a warning label on one of Trump's inaccurate tweets about mail-in voting procedures but has taken no action on other, similar tweets from the president. Further, social media platforms' current policies may not be broad enough to take into account emerging voter suppression narratives about voter fraud and election rigging. Indeed, Trump has pushed inaccurate content about mail-in voting across social media platforms, falsely claiming it will lead to voter fraud and election rigging. With many states expanding their mail-in voting procedures due to the pandemic, Trump's continued inaccurate attacks on this method of voting threaten to confuse and discourage eligible voters from casting their ballots.

Platform content moderation policies also contain significant holes that bad actors continue to exploit to proliferate online voter suppression. For example, Facebook refuses to fact-check political ads even if they contain demonstrably false information that discourages people from voting. President Trump's campaign has taken advantage of this by flooding the platform with hundreds of ads that spread disproven claims about voter fraud. Political ads with election disinformation can be algorithmically amplified or micro-targeted to specific communities to suppress their vote.

Social media platforms including Facebook and Twitter have recently announced new policies they will be rolling out to fight online voter suppression. As outlined above, there are some lessons platforms can learn from their efforts in combating COVID-19 disinformation. First, social media platforms should prioritize directing their users to authoritative sources of information when it comes to the election, including state and local election officials. Second, platforms must consistently enforce and expand their content moderation policies as appropriate to remove election disinformation. As with their COVID-19 disinformation policies, platforms should build better tools and expand their definitions of harmful content when it comes to online voter suppression. Finally, platforms must address the structural problems that allow bad actors to engage in online voter suppression tactics, including algorithmic amplification and targeted advertisements.

COVID-19 -- as dangerous and terrifying an experience as it has been -- has at least proven that when platforms want to step up their efforts to stop the spread of disinformation, they can. If we want authentic civic engagement and a healthy democracy that enables everyone's voice to be heard, then we need digital platforms to ramp up their fight against online voter suppression, too. Our voices -- and the voices of those in marginalized communities -- depend on it.

Just as combating COVID-19 disinformation is important to our public health, reducing the spread of election disinformation is critical to authentic civic engagement and a healthy democracy. As part of our efforts to stop the spread of online voter suppression, Common Cause will continue to monitor social media platforms for election disinformation, and we encourage readers to report any inaccurate content to our tip line. At the end of the day, platforms themselves must step up their fight against new online voter suppression efforts.

Yosef Getachew serves as the Media & Democracy Program Director for Common Cause.
Prior to joining Common Cause, Yosef served as a Policy Fellow at Public Knowledge, where he worked on a variety of technology and communications issues. His work has focused on broadband privacy, broadband access and affordability, and other consumer issues.
On Monday there was a... shall we say... contentious first hearing in the antitrust fight/contract negotiation between Apple and Epic over what Apple charges (and what it charges for...) in the iOS app store. At issue in the hearing was Epic's request for temporary restraining orders against Apple on two points: first, it wanted a restraining order that would force Apple to return Fortnite to the app store; second, it wanted a restraining order against Apple's plan to basically pull Epic's developer license for the wider Unreal Engine.

As the judge made pretty clear would happen during the hearing, she rejected the TRO for Fortnite, but granted it for the Unreal Engine. The shortest explanation: Apple removed Fortnite because of a move by Epic, so Epic was the cause of the removal. The threat to pull access for the Unreal Engine, however, seemed punitive -- a response to the lawsuit rather than anything done for a legitimate reason.

More specifically, for a TRO to issue, the key question is irreparable harm (i.e., you can get one if you can show that, without it, there will be harm that can't easily be repaired through monetary or other sanctions). But here, as the court notes, Epic, not Apple, created the first mess, and so it can fix things by complying with the contract. Since Epic can solve the issue itself, there is no irreparable harm. The opposite is true of the Unreal Engine, though:
Fluent City is an innovative language training organization, offering instruction to individuals, groups, and businesses in 11 different languages. Their online group language classes are small, social, and conversation-based, so you're ready to strike up a conversation even in the real world. With expert instructors from all over the world and the latest language technology, Fluent City builds engaging and highly relevant lesson plans and learning activities. Get fluent faster through classes that emphasize real human connection. It's on sale for $300.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Earlier this year, the Wall Street Journal revealed that ICE and CBP were buying location data from third-party data brokers -- something that seemed like a calculated move to dodge the requirements of the Supreme Court's Carpenter decision. There's a warrant requirement for historical cell site location data, but the two agencies appear to believe gathering tons of "pseudonymized" data to "help identify and locate" undocumented immigrants isn't a Fourth Amendment problem.

At this point, they're probably right. They may not be correct, but they don't have court precedent telling them they can't do this. Not yet. So, they're doing it. It may not be as immediately invasive as approaching a cell service provider for weeks or months of location data related to a single person, but this concerted effort to avoid running anything by a judge suggests even the DHS feels obtaining data this way is quasi-legal at best.

In late June, the House Committee on Oversight and Reform opened an investigation into Venntel's sale of location data to ICE and CBP. The Committee asked Venntel to hand over information about its data sales, whether or not it obtained consent from phone users to gather this data, and whether it applied any restrictions to the use of data by government agencies. The answers to the Committee's questions were due in early July. So far, Venntel has yet to respond.

Venntel's business hasn't slowed despite being investigated by Congress. Joseph Cox reports for Motherboard that CBP has just signed another deal with the data broker.
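Those scare quotes around "pseudonymized" are earned. Stripping names from location data does little when the traces themselves are identifying; a widely known re-identification trick is simply to infer where a device spends its nights. A minimal, hypothetical sketch of the idea (fabricated data, and not a claim about Venntel's actual methods):

```python
# Hypothetical sketch -- not Venntel's actual product or methods.
# "Pseudonymized" location data ties pings to an ad ID instead of a name,
# but the most common overnight location usually reveals a home address.
from collections import Counter

# (ad_id, hour_of_day, rounded_lat, rounded_lon) -- fabricated sample pings
pings = [
    ("a1b2-c3d4", 1, 45.512, -122.658),
    ("a1b2-c3d4", 2, 45.512, -122.658),
    ("a1b2-c3d4", 3, 45.512, -122.658),
    ("a1b2-c3d4", 14, 45.523, -122.676),  # daytime ping: somewhere else
]

def likely_home(pings, ad_id):
    """Most frequent location seen between midnight and 6am."""
    overnight = [
        (lat, lon)
        for pid, hour, lat, lon in pings
        if pid == ad_id and 0 <= hour < 6
    ]
    return Counter(overnight).most_common(1)[0][0] if overnight else None

print(likely_home(pings, "a1b2-c3d4"))  # (45.512, -122.658)
```

Swap the ad ID for a name via property records or a reverse address lookup, and the "pseudonym" stops doing any work at all.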
As we've been noting, Trump's executive order attempting to ban TikTok is not only legally unsound, it's not coherent policy. Chinese state hackers, with their unlimited budgets, can simply obtain this (and far more) data from any of the thousands of companies in the existing, unaccountable international adtech sector, from our poorly secured communications networks, or from the millions of Chinese-made IoT devices and "smart" products Americans attach to home and business networks with reckless abandon. The U.S. has no privacy law and is a mess on the privacy and security fronts. We're an easy mark, and TikTok is the very least of our problems.

With that as backdrop, it's clear that most of the biggest TikTok pearl-clutchers in the Trump administration couldn't care less about actual U.S. consumer security and privacy. After all, this is the same administration that refuses to shore up election security, strictly opposes even the most basic of privacy laws for the internet era, and has been working tirelessly to erode essential security protections like encryption. If the U.S. were actually interested in shoring up security and privacy, we'd craft coherent, overarching policies to address all of our security and privacy problems, not just those that originate in China.

Trump's real motivations for the ban lie elsewhere. As a delusional narcissist, part of his motivation is the attempt to portray himself as a savvy businessman, extracting leverage for a trade war with China that he clearly doesn't understand isn't working and is actually harming Americans. Spreading additional xenophobia as a party platform is also an obvious goal. But it's also becoming increasingly clear that at least some of the recent TikTok animosity originates with Trump's newfound BFFs over at Facebook, who've been hammering Trump with claims that Chinese platforms "don't share Facebook's commitment to freedom of expression" and "represent a risk to American values and technological supremacy":
The Syrian civil war has led to great human suffering, with hundreds of thousands killed and millions displaced. Another victim has been the region's rich archaeological heritage. Many of the most important sites have been seriously and intentionally damaged by the Islamic State of Iraq and Syria (ISIS). For example, the Temple of Bel, regarded as among the best preserved structures at the ancient city of Palmyra, was almost completely razed to the ground. In the past, more than 150,000 tourists visited the site each year, and like most tourists, many of them took photos of the Temple of Bel. The UC San Diego Library's Digital Media Lab had the idea of taking some of those photos, with their many different viewpoints, and combining them using AI techniques into a detailed 3D reconstruction of the temple:
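The article doesn't specify the Lab's exact toolchain, but the underlying technique is generally known as photogrammetry, or structure-from-motion: find the same physical points across many overlapping photos, then triangulate camera positions and 3D geometry from those correspondences. As a rough, assumption-laden illustration of just the first step (hypothetical filenames, OpenCV's stock feature matcher):

```python
# Rough illustration of the first step of a structure-from-motion
# (photogrammetry) pipeline -- not the UC San Diego Lab's actual code.
# Two tourists' photos of the same facade are matched feature-by-feature;
# from many such pairs, camera poses and 3D points can be triangulated.
import cv2

img1 = cv2.imread("temple_photo_1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("temple_photo_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only clearly-best matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

print(f"{len(good)} matched points shared by the two photos")
# A full pipeline would now estimate relative camera pose from these
# correspondences (e.g., cv2.findEssentialMat + cv2.recoverPose) and
# triangulate 3D points, repeating across thousands of photos.
```

The appeal of the crowdsourced approach is exactly this: every vacation snapshot is another camera angle, and the more angles the pipeline has, the denser and more accurate the reconstruction.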
While China-bashing is all the rage right now (much of it deserved, given the country's abhorrent human rights practices), it's sort of amazing what a difference a year makes. While the current focus of ire toward the Chinese government seems fixed on the COVID-19 pandemic and a few mobile dance apps -- never mind the fully embedded nature of Chinese-manufactured technology in use every day in the West -- late 2019 was all about China's translucent skin. Much of that had to do with China inching toward a slow takeover of Hong Kong and how several corporate interests in the West reacted to it. Does anyone else remember when our discussion about China was dominated by stories about Blizzard banning Hearthstone players for supporting Hong Kong and American professional sports leagues looking like cowards in the face of a huge economic market?

Yeah, me neither. But with all that is going on in the world, and all of the criticism, deserved or otherwise, being lobbed at the Chinese government, it's worth pointing out that the problems of last year are still with us. And while Google most recently took something of a stand against the aggression toward Hong Kong specifically, other companies are still bowing to China's thin skin in heavy-handed ways. The latest example is an admittedly trivial attempt by Activision to kneel at the altar of Chinese historical censorship.
Let's start this one by noting that "COVID parties" are an incredibly dumb and insanely dangerous idea. A few people have suggested them as a way to expose a bunch of people to COVID-19, in the belief that if it's mostly young and healthy people, they can become immune by first suffering through the disease, with a lower likelihood of dying. Of course, this leaves out the very real possibility of other permanent damage COVID-19 might cause and (much worse) the wider impact on other people -- including those who might catch COVID-19 from someone who got it at one of these "parties." It's also not at all clear how widespread the idea of COVID parties is. There have been reports of them, but most have been shown to be urban legends and hoaxes.

Whether or not COVID parties are real, some jackass decided to set up an Instagram account called "asu_covid.parties," supposedly to promote such parties among students of Arizona State University as they return to campus. The account (incorrectly and dangerously) claimed that COVID-19 is "a big fat hoax." Of course, if it were a hoax, why would you organize parties to infect people? Logic is apparently not a strong suit. Arizona State University appears to believe that the account was created by someone (or some people) in Russia to "sow confusion and conflict." And that may be true.
The COVID-19 pandemic is far from over, and as it rages on we're learning a lot about technology's role in a situation like this -- but it's also worth looking forward, and thinking about how tech will be involved in the process of repairing and recovering from the damage the pandemic has done. This week, we're joined by TechNYC executive director Julie Samuels to discuss the role of technology in a post-pandemic world.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
Get your First & Fourth Emojiment gear in the Techdirt store on Threadless »

We've got two new additions to our line of face masks in the Techdirt store on Threadless: our popular emoji-fied versions of the First and Fourth Amendments. We've considered adding more amendments to this line, but not all of them translate so easily -- so for now, you can enjoy these two extremely important ones in face mask form!

All the face masks are available in two versions (premium and standard) as well as youth sizes. And of course, the designs are also available on a wide variety of other products including t-shirts, hoodies, mugs, buttons, and more! Check out the Techdirt store on Threadless and order yours today.