Techdirt

Link https://www.techdirt.com/
Feed https://www.techdirt.com/techdirt_rss.xml
Updated 2025-08-20 05:46
More Video Game Art Is Being Sanitized, Likely To Appease China
Mere days ago, we were talking about Activision's decision to delete and replace the trailer for the latest Call of Duty game worldwide due to pressure from the Chinese government. That pressure came about over one second's worth of footage in the trailer that showed an image from the pro-democracy protests in 1989. While it was only a trailer for an unreleased game, the point I attempted to make is that this sets a terrible precedent. It's one thing to sanitize games, a form of art, for distribution within China. We could spend hours arguing over just how willing companies should be to bow to the thin skin of the Chinese government when it comes to art in favor of making huge sums of money, but that's at least understandable. It makes far less sense to apply those changes to the larger world, where China's pearl-clutching sensibilities aren't a thing. And now we're seeing this continue to occur. Kotaku has a quick write-up of changes made to a handful of re-released retro games, and this appears to be more of the same. We'll start with the re-release of Baseball Stars 2, a Neo Geo classic.
Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)
Summary: On March 15, 2019, the unimaginable happened. A Facebook user -- utilizing the platform's live-streaming option -- filmed himself shooting mosque attendees in Christchurch, New Zealand. By the end of the shooting, the shooter had killed 51 people and injured 49. Only the first shooting was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more. The stream was removed by Facebook almost an hour after it appeared, thanks to user reports. The moderation team began working immediately to find and delete re-uploads by other users. Violent content is generally a clear violation of Facebook's terms of service, but context does matter. Not every video of violent content merits removal, but Facebook felt this one did. The delay in response was partly due to limitations in Facebook's automated moderation efforts. As Facebook admitted roughly a month after the shooting, the shooter's use of a head-mounted camera made it much more difficult for its AI to make a judgment call on the content of the footage. Facebook's efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites -- an inevitability in the digital age. Even with moderators knowing exactly what they're looking for, platform users are still finding ways to post the shooter's video to Facebook. Some of this is due to the sheer number of uploads moderators are dealing with. The Verge reported the video was re-uploaded 1.5 million times in the 48 hours following the shooting, with 1.2 million of those automatically blocked by moderation AI. Decisions to be made by Facebook:
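A quick technical aside: the automated blocking described above generally works by fingerprinting known-bad footage and comparing each new upload against those fingerprints. Facebook's actual systems are proprietary and far more robust to re-encoding and cropping; what follows is a minimal sketch of the underlying idea using a simple average-hash fingerprint, with file names and the distance threshold invented for illustration.

# Toy perceptual-hash matcher: fingerprint a frame from the known video,
# then flag uploads whose frames land within a small Hamming distance.
from PIL import Image
import numpy as np

def average_hash(image_path, size=8):
    """64-bit perceptual hash: downscale, grayscale, threshold at the mean."""
    img = Image.open(image_path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a, b):
    return bin(a ^ b).count("1")

known = average_hash("known_bad_frame.jpg")      # hypothetical file names
candidate = average_hash("uploaded_frame.jpg")
if hamming(known, candidate) <= 10:              # small distance: likely a re-upload
    print("block upload and queue for human review")

The distance threshold is the tradeoff knob: tighten it and altered re-uploads slip through (part of why roughly 300,000 of those 1.5 million copies got past the automated filters), loosen it and unrelated videos get blocked.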
Wireless Carriers Once Again Fight Efforts At More Accurate Wireless Availability Maps
If you live in a rural area, or have driven across the country anytime in the last five years, you probably already know the telecom industry's wireless coverage maps are misleading -- at best. In turn, the data they deliver to the FCC is also highly suspect. Regardless, this is the data being used when we shape policy and determine which areas get broadband subsidies, and, despite some notable progress in improving this data in recent years, it's still a major problem. Last year, for example, the Trump FCC quietly buried a report showing how major wireless carriers routinely overstate wireless voice and data availability. Facing massive political pressure from pissed off (and bipartisan) state lawmakers eager for a bigger slice of federal subsidies, the FCC has started taking the basic steps necessary to improve things. One of those improvements is a recent proposal (pdf) that would include requiring carriers to actually drive around testing their network performance so they can provide more accurate, real-world data. This isn't a huge ask. But T-Mobile and AT&T are fighting back against the proposal, claiming it's "too expensive":
The Trust & Safety Professional Association: Advancing The Trust And Safety Profession Through A Shared Community Of Practice
For decades, trust and safety professionals -- those working in content moderation, fraud and risk, and safety -- have faced enormous challenges, often under intense scrutiny. In recent years, it's become even more clear that the role of trust and safety professionals is both critically important and difficult. In 2020 alone, we've seen an increasing need for this growing class of professionals to combat a myriad of online abuse related to systemic racism, police violence, and COVID-19 — such as hate speech, misinformation, price gouging, and phishing — while keeping a safe space for connecting people with vital, authoritative information, and with each other. Despite the enormous impact trust and safety professionals have on protecting the online and offline safety of people, the professional community has historically been dispersed, siloed, and informally organized. To date — unlike, say, in privacy — no organization has focused on the needs of trust and safety professionals in a way that builds a shared community of practice. This is why we founded the Trust & Safety Professional Association (TSPA) and the Trust & Safety Foundation Project (TSF) — something we think is long overdue. TSPA is a new, nonprofit, membership-based organization that will support the global community of professionals who develop and enforce principles and policies that define acceptable behavior online. TSF will focus on improving society's understanding of trust and safety, including the operational practices used in content moderation, through educational programs and multidisciplinary research. Since we launched in June, we've gotten a number of questions about what TSPA and TSF will (and won't) do. So we thought we'd tackle them right here, and share more with you about who's included, why we launched now, and what our vision is for the future. You can also hear us talk more about both organizations on episode 247 of the Techdirt podcast. And if you want to know even more, we're all ears!

Q&A

Q. How do you define trust and safety? Don't you mean content moderation?
We define trust and safety professionals as the global community of people who develop and enforce policies that define acceptable behavior online. Content moderation is a big part of trust and safety, and the area that gets the most public attention these days. But trust and safety also includes the people who tackle financial risk and fraud, those who process law enforcement requests, engineers who work on automating these policies, and more. TSPA is for the professionals who work in all of those areas.

Q. What's the difference between TSPA and TSF?
TSPA is a 501(c)(6) membership-based organization for professionals who develop and enforce principles and policies that define acceptable behavior and content online. Think ABA for lawyers, or IAPP for privacy people, but for those working in trust and safety, who can use TSPA to connect with a network of peers, find resources for career development, and exchange best practices. TSF is a fiscally sponsored project of the Internet Education Foundation and focuses on research. The two organizations are complementary, but have distinct missions and serve different communities. TSPA is a membership organization, while TSF has a charitable purpose.

Q. Why are you doing this now?
We first started discussing the need for something like this more than two years ago, in the wake of the first Content Moderation at Scale (COMO) conference in Santa Clara. The conference was convened by one of TSPA's founders and board members, Santa Clara University law professor Eric Goldman, which you can read about right here. After the first COMO get-together, it was clear that there was a need for more community amongst people who do trust and safety work.

Q. Are you taking positions on policy issues or lobbying?
Nope. We're not advocating for public policy positions on behalf of corporate supporters or anyone else. We do want to help people better understand trust and safety as a field, as well as shed light on the challenges that trust and safety professionals face.

Q. Ok, so you launched. Now what?
For TSPA, we're in the process of planning some virtual panel discussions that will happen before the end of the year on various topics related to trust and safety. Topics will range from developing wellness and resilience best practices, to operational challenges in the face of current events like the US presidential election and COVID-19. Longer term, we're working on professional development offerings, like career advancement bootcamps and a job board. Over at TSF, we partnered with the folks right here at Techdirt to launch with a series of case studies from the Copia Institute that illustrate challenging choices that trust and safety professionals face. We are also hosting an ongoing podcast series called Flagged for Review, with interviews with people with expertise in trust and safety. We're also looking for a founding Executive Director who can get TSPA and TSF off the ground. Send good candidates our way.

Q. Sounds pretty good. How do I get involved?
Sign up here so we can share more with you about TSPA and TSF in the coming months as we open our membership and develop our offerings. Follow us on Twitter, too. If you work for one of our corporate supporters, you can reach out to your trust and safety leadership as well to find out more. We'd also love to hear from organizations and people who want to help out, or whose work is complementary to our own. We're excited to further develop and support the community of online trust and safety professionals.
As Speakers At The RNC Whined About Big Tech Bias, You Could Only Watch The Full Convention Because Of 'Big Tech'
There was much nonsense spewed at this week's Republican National Convention, and as was to be expected given the nonsense narrative about "anti-conservative bias" in big tech, there were plenty of people using the podium to whine about how the big internet companies are working against them. Thanks to the folks at Reason for pointing out how utterly stupid and counterfactual this actually is. Indeed, if you wanted to watch the RNC speeches (and I'm not sure why you would), the only place to actually watch them uninterrupted was... on those internet platforms the speakers swore were trying to silence them.
Daily Deal: The Green Thumb Gardening Bundle
Learn a new hobby with the Green Thumb Gardening Bundle. You'll learn the basics of caring for houseplants, succulents, grass, herbs, and more. Courses also cover garden design, plant propagation, pruning, and building your own planters. The bundle is on sale for $20. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
China's Efforts To Hide Its Muslim Concentration Camps Helped Reporters To Find Them
Here's quite an example of the Streisand Effect. Buzzfeed investigative reporters have an incredible new series of stories about the massive new prison/concentration camps built in China to house the various Muslim minority populations it has been imprisoning (Uighurs, Kazakhs and others). But what's relevant from our standpoint here at Techdirt is just how they were able to track this information down. As revealed in a separate article, Buzzfeed's reporters effectively used the Streisand Effect. They looked at the maps provided online by the Chinese internet giant Baidu and spotted a bunch of areas that were "blanked out." The reporters noticed that this graying out was deliberate and different from the standard "general reference tiles" that Baidu would show when it didn't have sufficiently high-resolution imagery. Once they realized that something must be going on in those spots, they found many more blanked-out areas in places where the reported complexes were:
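A short technical aside on how such masked tiles can be spotted programmatically: real satellite imagery has high pixel variance, while a deliberately blanked tile is a nearly uniform gray square. Below is a minimal sketch of that check; the tile URL, coordinates, and variance threshold are placeholder assumptions, not Baidu's actual API.

# Toy blanked-tile detector: fetch a map tile and test whether it is a
# near-uniform gray square instead of real imagery. URL is hypothetical.
import io
import requests
import numpy as np
from PIL import Image

TILE_URL = "https://tiles.example.com/{z}/{x}/{y}.png"  # placeholder endpoint

def tile_is_blanked(z, x, y, var_threshold=25.0):
    resp = requests.get(TILE_URL.format(z=z, x=x, y=y), timeout=10)
    resp.raise_for_status()
    pixels = np.asarray(Image.open(io.BytesIO(resp.content)).convert("L"),
                        dtype=np.float32)
    # Near-zero variance means a flat gray mask rather than real terrain.
    return pixels.var() < var_threshold

# Sweep a grid of candidate coordinates and log the suspicious tiles.
suspicious = [(x, y) for x in range(6200, 6210) for y in range(3100, 3110)
              if tile_is_blanked(13, x, y)]
print(suspicious)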
U.S. Cable Broadband Monopolies Close In On 70% Broadband Market Share
The U.S. telecom industry's monopolization problem shows no sign of slowing down. According to the latest data from Leichtman Research, the cable industry is nearing a 70% market share over fixed line broadband. That's thanks to many reasons, not least of which is that most U.S. phone companies have effectively given up on seriously upgrading their aging DSL lines, driving a greater portion of Americans to the only companies actually offering modern broadband speeds: Charter (Spectrum) and Comcast. Phone companies collectively lost another 150,000 subscribers last quarter, while cable providers added about 1,400,000 users in just three months. For the cable industry, this is all a wonderful thing. Less competition from phone companies, combined with a Trump FCC that couldn't care less about the sector's competition problems, means they can get away with charging higher rates than ever for a service that comes (not coincidentally) with some of the worst customer service ratings of any industry in America (seriously, stop and think about that for a moment). With COVID-19 making it clear that broadband is an essential utility, users are flocking to cable connections if they want to remain tethered to their jobs, education, and friends. Charter (Spectrum), as a result, saw 850,000 new customers in one quarter alone, a quarterly record for any broadband provider at any point in U.S. history:
Federal Court: No, You Fucking May Not Force Your Way Into A Home And Strip Search Six Very Young Children
The facts of this case are pretty ugly so let's just dive right into them. As Lenore Skenazy reported for Reason last year, two government employees decided a single incident of a mother leaving her kids in the car was all the reason they needed to swing by the house and strip-search every one of her six children. The oldest was five years old. The youngest were a pair of 10-month-old twins. Holly Curry stopped at a shop to get some muffins and left her six children in the car while she ran in to get them. She was gone for less than 10 minutes. It was only 67 degrees outside. When she came back to her car, two police officers told her she shouldn't leave her kids in the car and wrote up a "JC3 form" -- a hotline-type alert that would be forwarded to Kentucky's Child Protective Services. The next day a CPS investigator showed up. So did a sheriff's deputy. Here's what happened next:
Aldi, Brewdog Brand War Ends In The Best Possible Way: Collaboration
The world may well feel like a terrible place to you right now. A pandemic is sweeping much of the world, with leaders from many countries playing the ostrich, or else treating the victims as though they were mere idiots. Racial tensions and brutal police practices are on full display, with the most surprising aspect being that they continue even as the world is shining a spotlight on the offenders. World leadership appears to be in full retreat, leaving space for truly nefarious actors to shoulder their way into ever more troubling activities. Just last week, the White Sox beat the Cubs in two out of three. These are dark, dark times indeed. But, hark, all ye who may despair, for I bring good tidings. Mere days ago, we talked about a brand war that appeared to be brewing (heh) between grocer Aldi and Brewdog, a self-styled "punk brewery." It started when Brewdog released a "Punk IPA," fully in line with its branding motif. Aldi then released a beer called "Anti-Establishment IPA" in a similar looking blue can. This led to Brewdog suggesting on Twitter that maybe it should release a "Yaldi" beer. Aldi said "ALD IPA" would be a better name... and Brewdog agreed, rebranding the beer under that name. Notably absent from the whole episode were cease and desist notices from either side, lawyers filing trademark lawsuits, or any legal machinations of any kind. Instead, there was much good-natured ribbing and a fair amount of congenial creativity at play. In the end, Aldi's social media accounts had a laugh at Brewdog taking its suggestion, and even mentioned it might have to save some aisle space for the newly branded beer. Which, in conclusion, appears to be happening.
It's Time To Start Dismantling One Of The Nation's Oldest Racist Institutions: Law Enforcement
For as long as cops have been poorly behaved, people have talked about defunding the police. This talk has gotten louder in recent years and almost deafening in recent weeks, as protests over police brutality erupted around the nation in the wake of the George Floyd killing. But what does it mean to defund the police? In most cases, it doesn't mean getting rid of police departments. It means taking some of the millions spent on providing subpar law enforcement and spreading it around to social services and healthcare professionals, to steer people trained to react with violence away from people who would be better served by social service safety nets or interventions by people trained to handle mental health crises. Those opposed to defunding police departments (that's most police officials and officers) say it can't be done without ushering in a criminal apocalypse. Police departments demand an inordinate share of most cities' budgets, but law enforcement officials refuse to agree that money should be steered away from them even as cities prepare to redirect some calls cops normally handle to other city services. Cops believe they're the "thin blue line" between order and chaos. They believe they're the only thing standing between good people and criminals. But that's just something they say to make themselves feel better about the babysitting and clerical work that consumes most of their working hours. Josie Duffy Rice's excellent article about the long racist history of American law enforcement brings the receipts. What's standing between us and supposed chaos is barely anything at all.
Mass Biometric Scanning Of Students Is COVID-19's Latest Dystopian Twist
COVID-19 has disrupted almost everything. Most schools in the United States wrapped up the 2019-2020 school year with zero students in their buildings, hoping to slow the spread of the virus. Distance learning is the new normal -- something deployed quickly with little testing, opening up students to a host of new problems, technical glitches, and in-home surveillance. Zoom replaced classrooms, online products replaced teachers, and everything became a bit more dystopian, adding to the cloud of uncertainty ushered in by the worldwide spread of a novel virus with no proven cure, treatment, or vaccine. Schools soon discovered Zoom and other attendance-ensuring options might be a mistake as miscreants invaded virtual classrooms, distributing sexual and racist content to unsuspecting students and teachers. These issues have yet to be solved as schools ease back into Distance Learning 2.0. Then there's the problem with tests. Teachers and administrators have battled cheating students as long as testing has existed. Now that tests are being taken outside of heavily controlled classrooms, software is stepping in to do the monitoring. That's a problem. It's pretty difficult to invade someone's privacy in a public school, where students give up a certain amount of their rights to engage in group learning. Now that learning is taking place in students' homes, schools and their software providers seem to feel this same relinquishment of privacy should still be expected, even though the areas they're now encroaching on have historically been considered private places. As the EFF reports, testing is now being overseen by Professor Big Brother and his many, many eyes. All of this is in place just to keep students from cheating on tests:
Robert F. Kennedy Jr.'s Insanely Stupid Lawsuit Against Facebook
As you may have heard, last week Robert F. Kennedy Jr. and his anti-vax organization "Children's Health Defense" filed a supremely stupid lawsuit against Facebook, Mark Zuckerberg, and fact-checking organizations Poynter and PolitiFact, among others. It was filed early last week and I've wanted to write it up since someone sent it to me a few hours after it was filed, but, honestly, this lawsuit is so incredibly stupid that every time I tried to read through it or write about it, my brain just shut down. I've been incredibly unproductive the last week almost entirely because of this silly, silly lawsuit and my brain's unwillingness to believe that a lawsuit this stupid has been filed. And, as regular readers know, I write about a lot of stupid lawsuits. But this one is special. The basis (if you can call it that) for this lawsuit is that Kennedy is mad that Facebook is blocking the medical disinformation he and his organization publish. Because it's wrong. And dangerous. And stupid. Facebook has every right to do this, of course, so the lawsuit has to come up with the dumbest possible arguments as its basis. We've covered lots of other bad lawsuits about content moderation, but the knots Kennedy and his team tie themselves in to make this argument are truly special (and I don't mean that in a positive way):
Lindsey Graham Says We Need To Get Rid Of Section 230 To Sue 'Batshit Crazy' QAnon. That's Not How Any Of This Works.
As various Republicans in Congress have tried to tap dance around the fact that they're the political party of the batshit crazy QAnon conspiracy theory cult, it's actually nice to see Senator Lindsey Graham -- who had become a consistent Trump kissass over the past few years -- speak up in a Vanity Fair interview and call out QAnon for actually being "batshit crazy." He didn't tiptoe around it like some others:
Daily Deal: Screenwriting Made Easy
Learn screenwriting the fast, easy, and simple way in the Screenwriting Made Easy 2020 Beginner Course. With 38 lectures, it will go over all the basics that you need for planning your movie script including the idea, structure and characters, scriptwriting, screenplay format, and what to do after writing the first draft. It's on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
We Ran Our Online Election Disinformation Simulation Game And There's Plenty To Be Worried About
If you are interested in having us run Threatcast 2020, or in commissioning some other "serious" games, for your organization or as a group event, please contact us. Back at the end of January, you may recall that we wrote about Threatcast 2020, an in-person election disinformation brainstorming simulation we created last year in partnership between our think tank, the Copia Institute, and Randy Lubin of Leveraged Play. The game was developed as an in-person brainstorming exercise to look at various strategies that might be used to engage in (and counter) disinformation and misinformation strategies around the 2020 election. We had hoped to run the event throughout this year. Of course, soon after we announced it, the pandemic hit the US pretty hard, and the idea of running in-person events disappeared. The game had a variety of specific elements to it, and replacing it via Zoom just wouldn't be the same. After it became clear that the pandemic situation would almost certainly rule out all in-person events this year, we set about making an online version of the game, which we completed a few weeks back. We've now run the event a few times -- some for private groups, and one "showcase" event we put on just last week. The event itself was run under Chatham House rules, so we will not identify who attended or what individuals said, but I can talk a bit about what happened at the event. And, just for clarification, we had a wide range of participants -- from companies, non-profits, foundations, academia, and government. One participant who did agree to be named was famed investor Esther Dyson, who told me of the event that "It was fun and funny, but it had enough truth in it to be an amazing and eye-opening experience. This kind of simulation is exactly the preparation people need for the real world, whatever world they operate in." She also noted her key takeaway from the event: "The most compelling message is that the chaos hackers were almost redundant in the ugly world that the two warring parties - or four warring factions - were creating for themselves and all around them. Our wish, in playing as the chaos team, was for a contested election, not a specific winner. And a final key message: it will be important to see who can bring us together - especially AFTER the election." The game itself involves players working in teams as various political factions -- representing a broad coalition of political operatives (not as specific candidates or campaigns) -- and responding to certain situational prompts (and actions by other teams) as they navigate from now through the election (and beyond). Not all of the factions are interested in supporting a happy democratic election. In the event we ran last week, there were four rounds covering the run-up to the election and the immediate aftermath of the election. The players brought a vast array of manipulation and deception to the campaigns and created an atmosphere of paranoia, anger, and confusion. Over the course of the election, the center-right Republicans turned their focus to down-ballot races, enabling the GOP to keep the Senate and retake the House of Representatives even as the Democrats won the presidency. However, Trump refused to concede defeat and the game ended with a standoff at the White House.
I should note that while there is, within the game, some election modeling to see how these strategies impact the actual election, the game is not designed to simulate (and certainly not to predict) the outcome of the election, but rather to simulate what kinds of disinformation we'll see (across the board). Along those lines, I'll note that the results of this simulation turned out quite different from the other Threatcasts we have run. Of particular interest in last week's simulation: the amount of chaos. If 2020 has taught us anything, it's that nothing seems off the table, and no idea is too crazy. That played out within our game as well (though at least one of our judges noted that even some of the more "extreme" ideas presented were ones that were already playing out in real life). Another element that played out, as Esther Dyson noted above, was just how much chaos there is overall -- such that some of the players (who were in the role of chaos agents, trying to create more chaos) found that the other factions were more or less doing their job for them, making it easier to just amplify the crazy concepts others were coming up with. Again, that feels somewhat true to life. I was at least somewhat surprised at the role that TikTok played in the various campaigns. Nearly all of the factions at one point or another came up with a TikTok strategy -- perhaps foreshadowing where the technological battleground will be this year. Not surprisingly, much of the strategy of those supporting the Democrats in the election focused first on influencing what few swing voters remain, and then pivoted heavily towards getting out the vote and increasing voter participation. On the Republican side, there was a split as noted above. More traditional Republicans mostly ignored the presidential campaign and focused on down-ballot congressional races, while the Trump campaign focused heavily on spreading fear, uncertainty, and doubt about... well... everything. Running Threatcast has been quite eye-opening in highlighting the many different ways in which disinformation and misinformation are likely to show up in the next few months. If you're interested in having us run Threatcast 2020 for your organization or group (it's way, way, way better than a Zoom happy hour), please contact us.
Bridgefy, A Messaging App Hyped As Great For Protesters, Is A Security Mess
Over the last year Bridgefy, a messaging app backed by Twitter cofounder Biz Stone, has been heavily promoted as just perfect for those trying to stand up to oppressive, authoritarian governments. The reason: the app uses both Bluetooth and mesh network routing to let users within a couple hundred meters of one another send group and individual messages -- without their packets ever touching the internet. Originally promoted as more of a solution for those out of reach of traditional wireless, more recently the company has been playing up its product's use by protesters in Belarus, India, the U.S., Zimbabwe, and Hong Kong. The problem: the app is a security and privacy mess, and the company has known since April, yet it's still marketing the app as great for protesters. A new research study, first spotted by Ars Technica, found that the app suffers from numerous vulnerabilities that could actually put protesters at risk:
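For context on the architecture: the article doesn't document Bridgefy's exact protocol, but offline mesh messengers of this kind generally rely on controlled flooding, in which every node relays any message it hasn't already seen until a hop limit runs out. Here's a toy simulation of that routing idea, with all names and parameters invented for illustration.

# Toy flood-routing mesh: nodes rebroadcast unseen messages to radio-range
# neighbors until a hop limit (TTL) is exhausted, so messages travel
# device-to-device without any internet connection.
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # nodes within Bluetooth range
        self.seen = set()     # message IDs already relayed
        self.inbox = []

def deliver(origin, msg_id, text, ttl=5):
    queue = deque([(origin, ttl)])
    while queue:
        node, hops = queue.popleft()
        if msg_id in node.seen or hops < 0:
            continue
        node.seen.add(msg_id)
        node.inbox.append(text)
        for peer in node.neighbors:
            queue.append((peer, hops - 1))

# Three protesters in a chain: a can reach b, b can reach c.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
deliver(a, "msg-1", "meet at the square")
print(c.inbox)  # ['meet at the square'], relayed via b

Note that in a flood-based mesh every intermediate node handles your packets, which is exactly why the researchers' findings about weak cryptography matter so much: the routing model assumes strangers will carry your messages.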
On Appeal, 'Star Trek Discovery' Still Doesn't Infringe On Video Game's Copyright
Star Trek is one of the most beloved science fiction series in history, so it's no surprise that the franchise has seen its share of intellectual property flare-ups. With Viacom manning the IP enforcement guns, it only makes sense that the series has been the subject of the company's failed attempt to pretend Fair Use doesn't exist, the company's failed copyright enforcement that took down an authorized Star Trek panel, and the company's failed attempt to actually be good humans to the series' adoring fans. But this is not a story of Viacom failing at yet another thing. Instead, Viacom/CBS, along with Netflix, won in court, defeating an appeal by a video game maker that tried to claim that one episode of Star Trek Discovery infringed on the copyrights for a video game.
Content Moderation Case Study: US Army Bans Users For Asking About War Crimes On Twitch & Discord (July 2020)
Summary: Content moderation questions are not just about the rules that internet platforms create for themselves to enforce: they sometimes involve users themselves enforcing some form of the site's rules, or their own rules, within spaces created on those platforms. One interesting case study involves the US Army's esports team and how it has dealt with hecklers. The US Army has a variety of different channels for marketing itself to potential recruits, and lately it's been using its own "professional esports team" as something of a recruiting tool. Like many esports teams, the US Army team set up a Discord server. After some people felt that the Army was trying to be too "cute" on Twitter -- by tweeting the internet slang "UwU" -- a bunch of users set out to see how quickly they could be banned from the Army's Discord server. In fact, many users started bragging about how quickly they were being banned -- often by posting links or asking questions related to war crimes, and accusations of the US Army's involvement in certain war crimes. This carried over to the US Army's esports streaming channel on Twitch, where it appears that the Army set up certain banned words and phrases, including "war crimes," leading at least one user -- esports personality Rod "Slasher" Breslau -- to try to get around that filter by typing "w4r cr1me" instead. This made it through, and a few seconds later Breslau was banned from the chat by the Army's esports player Green Beret Joshua "Strotnium" David, with David saying out loud during the stream "have a nice time getting banned, my dude." Right before saying this, David was mocking "internet keyboard monsters" for this kind of activity. When asked about this, the Army told Vice News that it considered the questions to be a form of harassment, and in violation of Twitch's stated rules, even though it was the Army that was able to set the specific moderation rules on the account and choose who to ban:
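A short aside on the filter evasion at play here: "w4r cr1me" slips past an exact-match banned-phrase list, and a character-normalization pass is the usual first countermeasure. Twitch's real moderation internals are not public, so the sketch below is purely illustrative.

# Toy banned-phrase filter showing why "w4r cr1me" evades exact matching,
# and how a leetspeak-normalization pass catches the substitution.
BANNED = {"war crime", "war crimes"}
LEET = str.maketrans({"4": "a", "1": "i", "0": "o", "3": "e", "@": "a", "$": "s"})

def naive_blocked(message):
    return any(phrase in message.lower() for phrase in BANNED)

def normalized_blocked(message):
    return any(phrase in message.lower().translate(LEET) for phrase in BANNED)

print(naive_blocked("w4r cr1me"))       # False: evades the exact-match list
print(normalized_blocked("w4r cr1me"))  # True: caught after normalization

Normalization only raises the bar, of course; determined chat users will iterate on spellings faster than a static substitution table can keep up.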
When It Comes To Qualified Immunity, Where Your Rights Were Violated Matters More Than The Fact Your Rights Were Violated
Your rights are more protected in some areas of the country than in others. That's the conclusion reached by Reuters and its examination of qualified immunity cases across the country. Reuters' first report on qualified immunity showed we have the Supreme Court to blame for the high bar plaintiffs must leap to hold police officers accountable for rights violations. The doctrine was created by the court back in 1967. Subsequent decisions have made it easier for cops to escape judgment by limiting the lower courts' ability to hand down precedent on rights violations. Fewer precedential decisions mean fewer cops "know" their violation of citizens' rights was wrong, leading to more dismissals at summary judgment, where all an officer has to do is raise the qualified immunity defense. If no case is on point, the cop wins and the victim loses. But courts can interpret Supreme Court precedent differently, leading to some very noticeable variations in qualified immunity cases. This report shows the worst place to sue a police officer is the Fifth Circuit, which covers Texas, Louisiana, and Mississippi. If you're a terrible cop, the best place to work is Texas, where the Appeals Court will side with you more often than in any other state.
Social Media Can Apply COVID-19 Policies To Reduce The Spread Of Election Disinformation
With less than eighty days until Election Day and a pandemic surging across the country, disinformation continues to spread across social media platforms, posing dangers to public health, voting rights, and our democracy. Time is short, and social media platforms need to ramp up their efforts to combat election disinformation and online voter suppression — just as they have with COVID-19 disinformation. Social media platforms have content moderation policies in place to counter both COVID-19 disinformation and election disinformation. However, platforms seem to be taking a more proactive approach to combating COVID-19 disinformation by building tools, spending significant resources, and most importantly, changing their content moderation policies to reflect the evolving nature of inaccurate information about the virus. To be clear, COVID-19 disinformation is still rapidly spreading online. However, the platforms' actions on the pandemic demonstrate they can develop specific policies to address and remove this harmful content. Platforms' efforts to mitigate election disinformation, on the other hand, are falling short, due to the significant gaps that remain in their content moderation policies. Platforms should seriously examine how their COVID-19 disinformation policies can apply to reducing the spread of election disinformation and online voter suppression. Disinformation on social media can spread in a variety of ways, including (1) a lack of prioritization of authoritative sources of information and third-party fact-checking; (2) algorithmic amplification and targeting; and (3) platform self-monetization. Social media platforms have revised their content moderation policies on COVID-19 to address many of the ways disinformation can spread about the pandemic. For example, Facebook, Twitter, and YouTube all direct their users to authoritative sources of COVID-19 information. In addition, Facebook works with fact-checking organizations to review and rate pandemic-related content; YouTube utilizes fact-checking information panels; and Twitter is beginning to add fact-checked warning labels. Twitter has also taken the further step of expanding its definition of what it considers harmful content in order to capture and remove more inaccurate content related to the pandemic. To reduce the harms of algorithmic amplification, Facebook uses automated tools to downrank COVID-19 disinformation. And to stop the monetization of pandemic-related disinformation, Facebook restricts its advertising policies to prevent the sale of fraudulent medical equipment and prohibits ads that use exploitative tactics to create panic over the pandemic. These content moderation policies have resulted in social media platforms taking down significant amounts of COVID-19 disinformation, including recent posts from President Trump. Again, disinformation about the pandemic persists on social media. But these actions show the willingness of platforms to take action and reduce the spread of this content. In comparison, social media platforms have not been as proactive in enforcing or developing new policies to respond to the spread of election disinformation. Platforms' civic integrity policies are primarily limited to prohibiting inaccurate information about the processes of voting (e.g., misrepresentations about the dates and times people can vote).
But even these limited policies are not being consistently enforced. For example, Twitter placed a warning label on one of Trump's inaccurate tweets about mail-in-voting procedures but has taken no action on other similar tweets from the president. Further, social media platforms' current policies may not be broad enough to take into account emerging voter suppression narratives about voter fraud and election rigging. Indeed, Trump has pushed inaccurate content about mail-in-voting across social media platforms, falsely claiming it will lead to voter fraud and election rigging. With many states expanding their mail-in-voting procedures due to the pandemic, Trump's continued inaccurate attacks on this method of voting threaten to confuse and discourage eligible voters from casting their ballot. Platform content moderation policies also contain significant holes that bad actors continue to exploit to proliferate online voter suppression. For example, Facebook refuses to fact-check political ads even if they contain demonstrably false information that discourages people from voting. President Trump's campaign has taken advantage of this by flooding the platform with hundreds of ads that spread disproven claims about voter fraud. Political ads with election disinformation can be algorithmically amplified or micro-targeted to specific communities to suppress their vote. Social media platforms including Facebook and Twitter have recently announced new policies they will be rolling out to fight online voter suppression. As outlined above, there are some lessons platforms can learn from their efforts in combating COVID-19 disinformation. First, social media platforms should prioritize directing their users to authoritative sources of information when it comes to the election. Authoritative sources of information include state and local election officials. Second, platforms must consistently enforce and expand their content moderation policies as appropriate to remove election disinformation. Like their COVID-19 disinformation policies, platforms should build better tools and expand definitions of harmful content when it comes to online voter suppression. Finally, platforms must address the structural problems that allow bad actors to engage in online voter suppression tactics, including algorithmic amplification and targeted advertisements. COVID-19 -- as dangerous and terrifying an experience as it has been -- has at least proven that when platforms want to step up their efforts to stop the spread of disinformation, they can. If we want authentic civic engagement and a healthy democracy that enables everyone's voices to be heard, then we need digital platforms to ramp up their fight against online voter suppression, too. Our voices -- and the voices of those in marginalized communities -- depend on it. Just as combating COVID-19 disinformation is important to our public health, reducing the spread of election disinformation is critical to authentic civic engagement and a healthy democracy. As part of our efforts to stop the spread of online voter suppression, Common Cause will continue to monitor social media platforms for election disinformation and encourages readers to report any inaccurate content to our tip line. At the end of the day, platforms themselves must step up their fight against new online voter suppression efforts. Yosef Getachew serves as the Media & Democracy Program Director for Common Cause.
Prior to joining Common Cause, Yosef served as a Policy Fellow at Public Knowledge, where he worked on a variety of technology and communications issues. His work has focused on broadband privacy, broadband access and affordability, and other consumer issues.
Judge Rejects Epic's Temporary Restraining Order Request For Fortnite (But Grants It For The Unreal Engine)
On Monday there was a... shall we say... contentious first hearing in the antitrust fight/contract negotiation between Apple and Epic over what Apple charges (and what it charges for...) in the iOS app store. The issue for the hearing was Epic's request for temporary restraining orders against Apple on two points: first, it wanted a restraining order that would force Apple to return Fortnite to the app store; second, it wanted a restraining order blocking Apple's plan to basically pull Epic's developer license for the wider Unreal Engine. As the judge made pretty clear would happen during the hearing, she rejected the TRO for Fortnite, but allowed it for the Unreal Engine. The shortest explanation: Apple removed Fortnite because of a move by Epic. So Epic was the cause of the removal. The threat to pull access for the Unreal Engine, however, seemed punitive in response to the lawsuit, and not for any legitimate reason. More specifically, for a TRO to issue, the key question is irreparable harm (i.e., you can get one if you can show that without one there will be harm that can't be easily repaired through monetary or other sanctions). But here, as the court notes, Epic, not Apple, created the first mess, and so it can fix it by complying with the contract. So there is no irreparable harm, since Epic can solve the issue itself. The opposite is true of the Unreal Engine, though:
Daily Deal: Fluent City 10-Week Language Learning Course
Fluent City is an innovative language training organization, offering instruction to individuals, groups, and businesses in 11 different languages. Their online group language classes are small, social, and conversation-based so you're ready to strike up a conversation, even in the real world. With expert instructors from all over the world and the latest language technology, Fluent City builds engaging and highly relevant lesson plans and learning activities. Get fluent faster through classes that emphasize real human connection. It's on sale for $300. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
CBP Is Still Buying Location Data From A Company Currently Being Investigated By Congress
Earlier this year, the Wall Street Journal revealed that ICE and CBP were buying location data from third-party data brokers -- something that seemed like a calculated move to dodge the requirements of the Supreme Court's Carpenter decision. There's a warrant requirement for historical cell site location data, but the two agencies appear to believe gathering tons of "pseudonymized" data to "help identify and locate" undocumented immigrants isn't a Fourth Amendment problem. At this point, they're probably right. They may not be correct, but they don't have court precedent telling them they can't do this. Not yet. So, they're doing it. It may not be as immediately invasive as approaching a cell service provider for weeks or months of location data related to a single person, but this concerted effort to avoid running anything by a judge suggests even the DHS feels obtaining data this way is quasi-legal at best. In late June, the House Committee on Oversight and Reform opened an investigation into Venntel's sale of location data to ICE and CBP. The Committee asked Venntel to hand over information about its data sales, whether or not it obtained consent from phone users to gather this data, and whether it applied any restrictions to the use of data by government agencies. The answers to the Committee's questions were due in early July. So far, Venntel has yet to respond. Venntel's business hasn't slowed despite being investigated by Congress. Joseph Cox reports for Motherboard that CBP has just signed another deal with the data broker.
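The scare quotes around "pseudonymized" are warranted, technically speaking: an advertising ID attached to a location history is trivially re-identifiable, because a device's most frequent overnight coordinates are usually its owner's home. A minimal sketch of that inference, with the data points, hours, and ID all invented for illustration:

# Why "pseudonymized" location data barely protects anyone: the modal
# nighttime location of a device is usually its owner's home address.
from collections import Counter
from datetime import datetime

pings = [  # (ad_id, ISO timestamp, rounded lat, rounded lon), all invented
    ("a1b2", "2020-08-20T02:14:00", 38.889, -77.035),
    ("a1b2", "2020-08-21T03:40:00", 38.889, -77.035),
    ("a1b2", "2020-08-21T13:05:00", 38.897, -77.036),
    ("a1b2", "2020-08-22T01:55:00", 38.889, -77.035),
]

def likely_home(records, night=(22, 6)):
    """Most frequent location seen between 10pm and 6am for one device."""
    at_night = Counter()
    for _, ts, lat, lon in records:
        hour = datetime.fromisoformat(ts).hour
        if hour >= night[0] or hour < night[1]:
            at_night[(lat, lon)] += 1
    return at_night.most_common(1)[0][0] if at_night else None

print(likely_home(pings))  # (38.889, -77.035): one reverse-geocode away from a name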
Surprise: Report Claims Facebook Has Been Driving White House TikTok Animosity
As we've been noting, Trump's executive order attempting to ban TikTok is not only legally unsound, it's not coherent policy. Chinese state hackers, with their unlimited budgets, can simply obtain this (and far greater) data from any of the thousands of companies in the existing, unaccountable international adtech sector, our poorly secured communications networks, or the millions of Chinese-made IoT devices or "smart" products Americans attach to home and business networks with reckless abandon. The U.S. has no privacy law and is a mess on the privacy and security fronts. We're an easy mark, and TikTok is the very least of our problems. With that as backdrop, it's clear that most of the biggest TikTok pearl clutchers in the Trump administration couldn't care less about actual U.S. consumer security and privacy. After all, this is the same administration that refuses to shore up election security, strictly opposes even the most basic of privacy laws for the internet era, and has been working tirelessly to erode essential security protections like encryption. If the U.S. was actually interested in shoring up U.S. security and privacy, we'd craft coherent, over-arching policies to address all of our security and privacy problems, not just those that originate in China. Trump's real motivations for the ban lie elsewhere. Part of it is the delusional narcissist's attempt to portray himself as a savvy businessman, extracting leverage for a trade war with China he clearly doesn't understand isn't working and is actually harming Americans. Spreading additional xenophobia as a party platform is also an obvious goal. But it's also becoming increasingly clear that at least some of the recent TikTok animosity is originating with Trump's newfound BFFs over at Facebook, who've been hammering Trump with claims that Chinese platforms "don't share Facebook's commitment to freedom of expression," and "represent a risk to American values and technological supremacy.":
Virtual Reconstruction Of Ancient Temple Destroyed By ISIS Is Another Reason To Put Your Holiday Photos Into The Public Domain
The Syrian civil war has led to great human suffering, with hundreds of thousands killed, and millions displaced. Another victim has been the region's rich archaeological heritage. Many of the most important sites have been seriously and intentionally damaged by the Islamic State of Iraq and Syria (ISIS). For example, the Temple of Bel, regarded as among the best preserved at the ancient city of Palmyra, was almost completely razed to the ground. In the past, more than 150,000 tourists visited the site each year. Like most tourists, many of them took photos of the Temple of Bel. The UC San Diego Library's Digital Media Lab had the idea of taking some of those photos, with their many different viewpoints, and combining them using AI techniques into a detailed 3D reconstruction of the temple:
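The "AI techniques" here are essentially photogrammetry, also known as structure-from-motion: match visual features across overlapping photos, recover the relative camera poses, and triangulate the matched points into a 3D cloud. Below is a hedged two-view sketch of that core step using OpenCV; the file names and camera intrinsics are illustrative assumptions, and real pipelines chain this across thousands of images with calibration estimated from EXIF data.

# Minimal two-view structure-from-motion: match features between two
# overlapping tourist photos, estimate relative pose, triangulate 3D points.
import cv2
import numpy as np

img1 = cv2.imread("temple_photo_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
img2 = cv2.imread("temple_photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features shared by both photos.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed pinhole intrinsics (focal length, principal point).
K = np.array([[2000.0, 0.0, 960.0], [0.0, 2000.0, 540.0], [0.0, 0.0, 1.0]])

# Relative camera pose from the essential matrix, then triangulation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 sparse point cloud
print(f"recovered {len(cloud)} 3D points")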
Activision Deletes And Replaces 'Call Of Duty' Trailer Worldwide Over 1 Second That Hurt China's Feelings
While China-bashing is all the rage right now (much of it deserved given the country's abhorrent human rights practices), it's sort of amazing what a difference a year makes. While the current focus of ire towards the Chinese government seems centered on the COVID-19 pandemic and a few mobile dance apps -- never mind the fully embedded nature of Chinese-manufactured technology in use every day in the West -- late 2019 was all about China's translucent skin. Much of that had to do with China's inching towards a slow takeover of Hong Kong and how several corporate interests in the West reacted to it. Does anyone else remember when our discussion about China was dominated by stories dealing with Blizzard banning Hearthstone players for supporting Hong Kong and American professional sports leagues looking like cowards in the face of a huge economic market? Yeah, me neither. But with all that is going on in the world and all of the criticism, deserved or otherwise, being lobbed at the Chinese government, it's worth pointing out that the problems of last year are still going on. And, while Google most recently took something of a stand against the aggression on Hong Kong specifically, other companies are still bowing to China's thin skin in heavy-handed ways. The latest example of this is an admittedly relatively trivial attempt by Activision to kneel at the altar of Chinese historical censorship.
Arizona State University Sues Facebook With Bogus Trademark Claim To Try To Stop COVID Parties Account
Let's start this one by noting that "COVID parties" are an incredibly dumb and insanely dangerous idea. A few people have suggested them as a way to expose a bunch of people to COVID-19 in the belief that if it's mostly young and healthy people, they can become immune by first suffering through having the disease, with a lower likelihood of dying. Of course, this leaves out the very real possibility of other permanent damage that getting COVID-19 might cause and (much worse) the wider impact on other people -- including those who might catch COVID-19 from someone who got it at one of these "parties." It's also not at all clear how widespread the idea of COVID parties is. There have been reports of them, but most have been shown to be urban legends and hoaxes. Whether or not COVID parties are actually real, some jackass decided to set up an Instagram account called "asu_covid.parties," supposedly to promote such parties among students of Arizona State University as they return to campus. The account (incorrectly and dangerously) claimed that COVID-19 is "a big fat hoax." Of course, if it were a hoax, why would you organize parties to infect people? Logic is apparently not a strong suit. Arizona State University appears to believe that the account was created by someone (or some people) in Russia to "sow confusion and conflict." And that may be true.
Techdirt Podcast Episode 253: Post-Pandemic Tech
The COVID-19 pandemic is far from over, and as it rages on we're learning a lot about technology's role in a situation like this — but it's also worth looking forward, and thinking about how tech will be involved in the process of repairing and recovering from the damage the pandemic has done. This week, we're joined by TechNYC executive director Julie Samuels to discuss the role of technology in a post-pandemic world. Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
New Face Masks: The First & Fourth Emojiments
Get your First & Fourth Emojiment gear in the Techdirt store on Threadless » We've got two new additions to our line of face masks in the Techdirt store on Threadless: our popular emoji-fied versions of the First and Fourth Amendments. We've considered adding more amendments to this line, but not all of them translate so easily — so for now, you can enjoy these two extremely important ones in face mask form! All the face masks are available in two versions (premium and standard) as well as youth sizes. And of course, the designs are also available on a wide variety of other products including t-shirts, hoodies, mugs, buttons, and more! Check out the Techdirt store on Threadless and order yours today.
Law Enforcement Training: People Saying 'I Can't Breathe' Are Just Suffering From 'Excited Delirium'
Daily Deal: Taskolly Project Manager
Taskolly is an easy, flexible, and visual way to manage your projects and organize anything. It's software that will help you and your team manage work and tasks so you can increase your productivity. Easily plan, collaborate, organize, and deliver projects of all sizes, on time, by using a single project planning software equipped with all of the right tools, all in one place. There are three tiers on sale: Pro Plan (5 users) for $39, Business Plan (10 users) for $59, and Enterprise Plan (unlimited users) for $149. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Massachusetts Top Court Says Cops Need Warrants To Engage In Long-Term Video Surveillance Of People's Houses
Is a police camera aimed at a publicly viewable area constitutional? That's a question courts have had to answer periodically. In most cases, the answer appears to be "no." Long-term surveillance -- even of a publicly viewable area -- is a government intrusion into private citizens' lives. This sort of intrusion requires a warrant and sufficient probable cause. A ruling by the Massachusetts Supreme Judicial Court doesn't quite reach the Fourth Amendment, but does find that seven months of surveillance by utility pole-mounted cameras violates the state's Constitution. The long-term surveillance of two residences resulted in multiple motions to suppress by the defendants. None of these have been granted, but the SJC has reversed the lower court's denial of those suppression motions. (via FourthAmendment.com) Here's the crucial part of the ruling [PDF], which notes the court isn't going to go federal with this, leaving the Fourth Amendment question open.
Boys And Girls Club Backtracks After Folks Ask Why It's Helping A Cable Monopoly Lobby The FCC
Last month we noted how the Boys and Girls Club was one of several organizations cable giant Charter (Spectrum) was using to lobby the FCC in a bid to kill off merger conditions affixed to its 2015 merger with Time Warner Cable. Many of those conditions actively protect consumers from monopoly price gouging (a temporary, seven-year moratorium on arbitrary and unnecessary usage caps, for example). Other conditions worked to expand broadband into less affluent areas. Despite the conditions actually helping, you know, boys and girls... the club's letter opposed them. In a letter to the FCC, the Boys and Girls Club insisted that a recent $5,000 donation by Charter to the organization helped it weather the COVID-19 storm, and that "lifting these conditions will level the playing field for Charter while having zero impact on the online video marketplace." But after activist and reporter Phil Dampier pointed out that wasn't true (garnering local press attention), both the Boys and Girls Club and Charter appear to have quickly pivoted to damage control mode. In a statement to a Rochester, New York NBC affiliate, the Club acknowledges that after getting a big donation they signed off on a letter to the FCC that was written by Charter -- without reading it:
Consumer Reports Study Shows Many 'Smart' Doorbells Are Dumb, Lack Basic Security
Like most internet of broken things products, we've noted how "smart" devices quite often aren't all that smart. More than a few times we've written about smart lock consumers getting locked out of their own homes without much recourse. Other times we've noted how the devices simply aren't that secure, with one study finding that 12 of the 16 smart locks tested could be relatively easily hacked thanks to flimsy security standards, something that's the primary feature of many internet of broken things devices. "Smart" doorbells aren't much better. A new Consumer Reports study examined 24 different popular smart doorbell brands, and found substantial security problems with at least five of the models. Many of these flaws exposed user account information, WiFi network information, or, in some cases, user passwords. Consumer Reports avoids getting too specific so as to avoid advertising the flaws while vendors try to fix them:
Documents Show Law Enforcement Agencies Are Still Throwing Tax Dollars At Junk Science
Recently, 269 gigabytes of internal law enforcement documents were liberated by hacker collective Anonymous -- and released by transparency activists Distributed Denial of Secrets (DDoSecrets). The trove contained plenty of sensitive law enforcement data, but also a lot of stuff law enforcement considers "sensitive" just because it doesn't want to let the public know what it's been spending their tax dollars on. The documents highlighted in this report by Jordan Smith of The Intercept show law enforcement agencies are spending thousands of dollars to maximize the Dunning-Kruger effect. People are still peddling junk science and discredited techniques to law enforcement agencies and We the People are picking up the tab.
Content Moderation And Human Nature
It should go without saying that communication technologies don’t conjure up unfathomable evils all by themselves. They are a convenience-enhancer, a conduit, and a magnifying lens amplifying something that’s already there: our deeply flawed humanity. Try as we might to tame it (and boy, have we tried), human nature will always rear its ugly head. Debates about governing these technologies should start by making the inherent tradeoffs more explicit.

Institutions

First, a little philosophizing. From the social contract onwards, significant resources have been devoted to attempting to subdue human nature’s predilection for self-preservation at all costs. Modern society is geared towards improving the human condition by striving to unlearn -- or at least overpower -- our more primitive responses.

One such attempt is the creation of institutions, with norms, rules, cultures and, on paper, inherently stronger principles than those rooted deep inside people.

It’s difficult to find ideologies that don’t allow for some need for institutions. Even the most ardent free market capitalists acquiesce to the -- limited, in their mindset -- benefits of certain institutions. Beyond order and a sense of impartiality, institutions help minimize humans’ unchecked power over consequential choices that can impact wider society.

One ideal posits that institutions (corporations, parties, governments), given unfettered control over society, could rid us of the aspects of our humanity that we’ve so intently tried to escape, bringing forth prosperity, equality, innovation, and progress. The fundamental flaw in that reasoning is that institutions are still intrinsically connected to humanity; they are created, implemented, and staffed by fallible human beings.

However strict the boundaries in which humans are expected to operate, the potential for partial or even total capture is very high. The boundaries are rarely entirely solid, and even if they were, humans always have the option not to comply. Bucking the system is not just an anomaly; in a large portion of non-totalitarian regimes it’s revered as a sign of independence and strong individuality, a characteristic of those lauded as mavericks.

The power of institutional norms to guard against the worst of what humans can offer has proven useless when challenged by people for whom self-preservation is paramount. A current and facile example is the rise to power of Donald Trump and his relentless destruction of society-defining unwritten rules.

Even without directly challenging an institution, a turn towards self-indulgence is easily achievable, forging a path to reshaping the institution in one's own image. The most obvious example is communism, wherein the lofty goal of equality is operationalized through a party-state apparatus meant to distribute the spoils of society’s labor equally. As history has shown, this is contingent on the sadly unlikely situation wherein all those populating the institutions are genuinely altruistic. Invariably, the best-case scenario dissipates, if it ever materialized, and inequality deepens -- the opposite of the desired goal.

This is not a tacit endorsement of a rule-less, institution-less dystopia simply because rules and institutions are not adept at a practically impossible task.
Instead, it should be read as a cautionary tale about overextending critical aspects of society and treating them as a panacea rather than a suitable and mostly successful palliative.

Artificial Intelligence

Given the continuous failure of institutions to overcome human nature, you’d think we would stop trying to remove our imperfect selves from the equation.

But what we’ve seen for more than a decade now is technology that directly and distinctly promises to remove our worst impulses, if not humans entirely, from thinking, acting, or doing practically anything of consequence. AI, the ultimate and literal deus ex machina, is advertised as a solution to a large number of much smaller concerns. Fundamentally, its answer to these problems is to remove the human element.

Years of research, experiments, blunders, mistakes and downright evil deeds let us safely conclude that artificial intelligence is about as successful at eliminating the imperfect human as the “you wouldn’t steal a car” anti-piracy campaign was at stopping copyright infringement. This is not to denigrate the important and beneficial work scientists and engineers have put into building intelligent automation tasked with solving complex problems.

Technology, and artificial intelligence in particular, is created, run and maintained by human beings with perspectives, goals, and inherent biases. Just as with institutions, once a glimpse of positive change or success is evident, we extrapolate it far beyond its limits and task it with the unachievable and unenviable goal of fixing humanity -- by removing it from the equation.

Platforms

Communication technology is not directly tasked with solving society; it is simply meant as a tool to connect us all. Much like AI, it has seemingly elegant solutions for messy problems. It’s easy to see that thanks to tech platforms, be they bulletin boards or TikTok, distance becomes a trivial obstacle to maintaining connection. Community can be built and fostered online, otherwise marginalized voices can be heard, and businesses can be set up and grown digitally. Even loneliness can be alleviated.

With such a slew of real and potential benefits, it’s no wonder we started ascribing them increasingly consequential roles in society -- roles these technologies were never built for, far beyond their technical and ethical capabilities.

The Arab Spring in the early 2010s wasn't just a liberation movement by oppressed and energized populations. It was also free PR for now-tech-giants Twitter and Facebook, as various outlets and pundits branded revolutions with their names. It didn't help that CEOs and tech executives seized on this narrative and, in typical Silicon Valley fashion, took to making promises like a politician trying to get elected.

When you set the bar that high, expectations understandably follow. The aura of tech solutionism makes such earth-shattering advancements seem ordinary.

Nearly everyone can picture the potential good these technologies can do for society. And while we may all believe in that potential, the reality is that, so far, communication technologies have mostly provided convenience. Sometimes this convenience is in fact life-saving, but mostly it’s just an added benefit.

Convenience doesn’t alter our core. It doesn’t magically make us better humans or create entirely different societies. It simply lifts a few barriers from our path.
This article may be seen as an attempt to minimize the perceived role of technology in society, in order to subsequently deny it and its makers any blame for how society uses it. But that is not what I am arguing.

An honest debate about responsibility has to start with a clear understanding of the actual task something accomplishes, the task others perceive it to accomplish, and its societal and historical context. A technology that provides convenience should not be fundamental to the functioning of a society. But convenience can easily become so commonplace that it ceases to be an added benefit and becomes an integral part of life, where the prospect of it being taken away is met with screams of bloody murder.

Responsibility has to be assigned to the makers, maintainers and users of communication technology, by examining which barriers are being lifted and why. There is plenty of responsibility to go around, and I am involved in a couple of projects that try to untangle this complex mess. However, these platforms are not the source of the negative parts of life; they are merely the conduit.

Yes, a sentient conduit can tighten or loosen its grip, divert, amplify, or temporarily block messages, but it isn’t the originator of those messages or of the intent behind them. It can surely be extremely inviting to messages of hate and division -- maybe because of business models, maybe because of engineering decisions, or maybe simply because growth and scale never happened in a proper way. But that hate and division is endemic to human nature, and to assume that platforms can do what institutions have persistently failed to do -- namely, entirely eradicate it -- is nonsensical.

Regulation

It is clear that platforms, having reached their current size and ubiquity, require updated and smart regulation in order to properly balance their benefits and risks. But the push (and counter-push) to regulate has to start from a perspective that understands both fundamental leaps: platforms are to human nature what Section 230 (or any other national-level intermediary liability law) is to the First Amendment (or any national-level text that inscribes the social consensus on free speech).

If your issue is with hate and hate speech, the main things you have to contend with are human nature and the First Amendment, not just the platforms and Section 230. Without a doubt, both the platforms and Section 230 are choices and explicit actions built on top of the other two, and are not fundamentally the only or best form of what they could be.

But a lot of the issues that bubble up within the content moderation and intermediary liability space come from a concern over the boundaries. That concern is entirely about the broader contexts, rather than the platforms or the specific legislation.

Regulating platforms has to start from the understanding that tradeoffs, most of which are cultural in nature, are inevitable. To be clear: there is no way to completely stop evil from happening on these platforms without making them useless. If we were to fully stamp out hate speech, we’d eliminate convenience and in some instances invalidate the very existence of these platforms.
That wouldn't be an issue if these platforms were still seen as simple conveyors of convenience, but they are currently seen as much more than that.

Tech executives and CEOs have moved into the fascinating space wherein they have to protect their market power to assuage shareholders, treat their products as mind-meltingly amazing to gain and keep users, and imply their role in society is transient and insignificant to mollify policymakers -- all at the same time.

The convenience afforded by these technologies allows nefarious actors to cause substantial harm to a substantial number of people. Some users get death threats, or even have their lives end tragically, because of interactions on these platforms. Others have their most private information or documents exposed, or experience sexual abuse or trauma in a variety of ways.

Unfortunately, these things happen in the offline world as well, and they are fundamentally predicated on the regulatory and institutional context and the tools that allow them to manifest. The tools are not off the hook; their propensity for failing to minimize harm, online and off, is due for important conversations. But they are not the cause. They are the conduit.

Thus, the ultimate goal of “platforms existing without hate or violence” is, very sadly, not realistic. Neither are the tradeoffs palatable: accepting the stripping of fundamental rights in exchange for a safer environment, or accepting that some people will suffer immense trauma and pain simply because one believes in the concept of open speech.

Maybe the solution is to not have these platforms at all, or to ask them to change substantially. Or maybe it’s to calibrate our expectations, or, better yet, to address the underlying issues in our society. Once we see what the boundaries truly are, any debate becomes infinitely more productive.

This article is not advancing any new or groundbreaking ideas. What it does is identify crucial and seemingly misunderstood pieces of the subtext and spell them out. Sadly, the fact that these more or less evident issues needed to be said in plain text should be the biggest takeaway.

As a qualitative researcher, I learned that there is no way to “de-bias” my work. Trying to remove myself from the equation results in a bland “view from nowhere” that is ignorant of the underlying power dynamics and inherent mechanisms of whatever I am studying. But that doesn’t mean we take off our glasses for fear that they'll influence what we see -- that would actually make us blind. We remedy the problem by acknowledging the glasses as well.

A communication platform (company, tech, product) without inherent biases is impossible. But that shouldn’t mean we can’t ask it to be better, whether through regulation, collaboration or hostile action. We just have to be cognizant of where we’re standing when we ask, the context, the potential consequences and, as this piece hopefully shows, what the platform can’t actually do.

The conversation surrounding platform governance would benefit immensely from these tradeoffs being made explicit. It would certainly dial down the rhetoric and the (genuine) visceral attitudes toward the debate, as it would force those directly involved or invested in one outcome to carefully assess the context and general tradeoffs.

David Morar, PhD, is an academic with the mind of a practitioner, currently a Fellow at the Digital Interests Lab and a Visiting Scholar at GWU’s Elliott School of International Affairs.
Appeals Court: City Employee's Horrific Facebook Posts About Tamir Rice Shooting Were Likely Protected Speech
Just your periodic reminder that the First Amendment protects some pretty hideous speech. And it does so even when that speech is uttered by public servants. Caveats apply, but the Sixth Circuit Court of Appeals [PDF] has overturned a lower court's dismissal of a lawsuit brought by a Cleveland EMS captain, who was fired after making the following comment several months after Cleveland police officers killed 12-year-old Tamir Rice as he played with a toy gun in a local park.
Daily Deal: The All-In-One Mastering Organization Bundle
The All-In-One Mastering Organization Bundle has 5 courses to help you become more organized and efficient. You'll learn how to organize all your digital files into a single inbox-based system, how to organize your ideas into a hierarchy, how to categorize each object in your home/apartment/office/vehicle into one of the categories from the "One System Framework," and more. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Secret Service Latest To Use Data Brokers To Dodge Warrant Requirements For Cell Site Location Data
Another federal law enforcement agency has figured out a way to dodge warrant requirements for historical cell site location data. The Supreme Court's Carpenter decision said these records are covered by the Fourth Amendment. But rather than approach service providers with warrants, agencies like CBP and ICE are buying location data in bulk from private companies that collect it.

These agencies argue they aren't violating the Constitution because the data is "pseudonymized" and doesn't specifically target any single person. But even cops using "reverse" warrants are still using warrants to gather this data. Federal agencies apparently can't be bothered with this nicety, preferring to collect information in bulk and work backwards to whatever it is they're looking for.

The Secret Service is the latest federal agency to buy access to Locate X -- one of the products already providing cell site location data to CBP and ICE. Joseph Cox has the details for Motherboard.
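The "pseudonymized" defense deserves scrutiny, because location traces tend to re-identify themselves. The sketch below is purely illustrative -- the data, field names, and rounding threshold are all invented, and nothing about Locate X's actual schema is public -- but it shows how little effort "working backwards" from an anonymous device ID actually takes:

```python
from collections import Counter

# Hypothetical "pseudonymized" pings: (device_hash, lat, lon, hour_of_day).
# No name or phone number appears anywhere in these records.
pings = [
    ("a3f9c2", 38.9012, -77.0402, 2),   # overnight
    ("a3f9c2", 38.9011, -77.0404, 23),  # overnight
    ("a3f9c2", 38.8893, -77.0502, 14),  # midday (office?)
]

def likely_home(device_pings):
    """Return the most common overnight location, rounded to roughly a block."""
    overnight = Counter(
        (round(lat, 3), round(lon, 3))
        for _, lat, lon, hour in device_pings
        if hour >= 22 or hour <= 6
    )
    return overnight.most_common(1)[0][0] if overnight else None

by_device = {}
for p in pings:
    by_device.setdefault(p[0], []).append(p)

for device, dp in by_device.items():
    # Cross-reference this coordinate with property records and the
    # pseudonym stops being a pseudonym.
    print(device, "-> likely home:", likely_home(dp))
```

One overnight cluster plus public property records is generally all it takes to turn a hashed device ID back into a name.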
VoLTE Flaw Lets A Hacker Spy On Encrypted Communications For A Measly $7,000
As we've noted, much of the hysteria surrounding TikTok isn't based on anything resembling consistent outrage. As in, many of the folks freaking out about a teen dancing app were nowhere to be found when U.S. wireless carriers were caught selling access to your location data to any random idiot. Most of the folks pearl-clutching about TikTok have opposed election security funding and even the most basic of privacy rules. Where was the outrage over the SS7 flaw that makes most wireless networks vulnerable to eavesdropping? Or over the lack of any security or privacy safeguards in the internet of things (IoT) space?

Which is all a long way of saying: if you're going to lose sleep over TikTok, you'll be shocked to learn there's an ocean of issues that folks are paying absolutely no attention to. Or, to put it another way, TikTok is probably the very least of a long list of problems related to keeping U.S. data secure.

The latest case in point: a report last week showed that, with around $7,000 worth of gear, a marginally competent person could eavesdrop on voice over LTE (VoLTE) communications, even though these transmissions are purportedly encrypted:
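The flaw described in the report -- published by academic researchers under the name "ReVoLTE" -- comes down to keystream reuse: some base stations encrypt two back-to-back calls on the same radio connection with the same keystream, so an attacker who records a target's call and then immediately calls the victim can use their own (fully known) call to recover the keystream and decrypt the target's. Here's a toy illustration of why reusing a stream cipher's keystream is fatal; this is a deliberately simplified sketch, not real VoLTE framing:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A stream cipher is only secure if each keystream is used exactly once.
# The bug: the same keystream ends up encrypting two different calls.
keystream = os.urandom(32)

target_audio   = b"victim's private conversation..."   # what we want
attacker_audio = b"attacker's own call, fully known."  # what we control

ct_target   = xor(target_audio, keystream)    # sniffed off the air
ct_attacker = xor(attacker_audio, keystream)  # sniffed off the air

# Known plaintext + ciphertext of our own call yields the keystream...
recovered = xor(ct_attacker, attacker_audio)
# ...which decrypts the victim's call without ever touching the actual key.
print(xor(ct_target, recovered))  # b"victim's private conversation..."
```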
Funniest/Most Insightful Comments Of The Week At Techdirt
This week, our first place winner is an anonymous comment summing up how there are no good guys in the Epic/Apple showdown:
This Week In Techdirt History: August 16th - 22nd
Five Years Ago

This week in 2015, new leaks confirmed what we suspected about AT&T's cozy relationship with the NSA, which was especially concerning given the company's long history of fraudulent and abusive behavior, and the fact that the NSA seemed to think telco partners freed it from the constraints of the Fourth Amendment. The leak also revealed that the agency was misleading at best about how many cellphone records it could access.

Ten Years Ago

This week in 2010, Peter Sunde gave a fascinating presentation on the history of The Pirate Bay, while we were emphasizing that record labels can still have a role in music if they embrace the ways that role is changing, and a new comprehensive graphic aptly demonstrated just how insane the music licensing world is. The trend of established musicians and industry folk using apocalyptic language to describe the impact of the internet continued, with rants from U2's manager and John Mellencamp (who compared the internet to the atomic bomb).

Fifteen Years Ago

This week in 2005, we took a look at how the DMCA was not just a failure but a completely avoidable one, with flaws that were obvious from the start, while we were pleased to see one person finally ready to fight back against the RIAA's lawsuits. The mobile music market was on the rise with Japan blazing the trail (and trying to debunk claims that this was due to a lack of wired connections), but we wondered if the market might be killed by aggressive use of DRM. Mobile games were also on the rise, but the biggest and most important development was one we (like many people) underestimated when it happened: Google bought Android, leading to some speculation that they might be building a mobile OS, which we said "seems unlikely".
Apple Goes In Even Harder Against Prepear Over Non-Apple Logo
A couple of weeks ago, we wrote about Apple opposing the trademark application of Prepear, a recipe-sharing phone app, over its pear logo. The whole thing was completely absurd. The logos don't look anything alike, the color schemes and artistic styles are different, and, also, a pear is not an apple. I likened the whole thing to those absurd CNN commercials, which should give you some idea of just how dumb this all was. So, thanks to this idiocy being exposed and the resulting public backlash, Apple finally realized the error of its ways and backed off the opposition.

Just kidding. Apple, in fact, has decided to double down in opposing Prepear's trademarks, now going after the Canadian trademark registration for the logo as well.
Content Moderation Case Study: Nextdoor Faces Criticism From Volunteer Moderators Over Its Support Of Black Lives Matter (June 2020)
Summary:

Nextdoor is the "neighborhood-focused" social network, which allows for hyper-local communication within a neighborhood. The system works by having volunteer moderators from each neighborhood, known as "leads." For many years, Nextdoor has faced accusations of perpetuating racial stereotyping, as people use the platform to report sightings of black men and women in their neighborhoods as somehow "suspicious."
Content Moderation Knowledge Sharing Shouldn't Be A Backdoor To Cross-Platform Censorship
Ten thousand moderators at YouTube. Fifteen thousand moderators at Facebook. Billions of users, millions of decisions a day. These are the kinds of numbers that dominate most discussions of content moderation today. But we should also be talking about 10, 5, or even 1: the number of moderators at sites like Automattic (WordPress), Pinterest, Medium, and JustPasteIt -- sites that host millions of user-generated posts but have far fewer resources than the social media giants.

There are a plethora of smaller services on the web that host videos, images, blogs, discussion fora, product reviews, comments sections, and private file storage. And they face many of the same difficult decisions about the user-generated content (UGC) they host, be it removing child sexual abuse material (CSAM), fighting terrorist abuse of their services, addressing hate speech and harassment, or responding to allegations of copyright infringement. While they may not see the same scale of abuse that Facebook or YouTube does, they also have vastly smaller teams. Even Twitter, often spoken of in the same breath as a "social media giant," has an order of magnitude fewer moderators, at around 1,500.

One response to this resource disparity has been to focus on knowledge and technology sharing across different sites. Smaller sites, the theory goes, can benefit from the lessons learned (and the R&D dollars spent) by the biggest companies as they've tried to tackle the practical challenges of content moderation. These challenges include both responding to illegal material and enforcing content policies that govern lawful-but-awful (and merely lawful-but-off-topic) posts.

Some of the earliest efforts at cross-platform information-sharing tackled spam and malware, such as the Mail Abuse Prevention System (MAPS), which maintains blacklists of IP addresses associated with sending spam. Employees at different companies have also informally shared information about emerging trends and threats, and the recently launched Trust & Safety Professional Association is intended to provide people working in content moderation with access to "best practices" and "knowledge sharing" across the field.

There have also been organized efforts to share specific technical approaches to blocking content across different services -- namely, hash-matching tools that enable an operator to compare uploaded files against a pre-existing list of content. Microsoft, for example, made its PhotoDNA tool freely available to other sites to use in detecting previously reported images of CSAM. Facebook adopted the tool in May 2011, and by 2016 it was being used by over 50 companies. (A toy sketch of the mechanism appears below.)

Hash-sharing also sits at the center of the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative that includes knowledge-sharing and capacity-building across the industry as one of its four main goals. GIFCT works with Tech Against Terrorism, a public-private partnership launched by the UN Counter-Terrorism Executive Directorate, to "shar[e] best practices and tools between the GIFCT companies and small tech companies and startups." Thirteen companies (including GIFCT founding companies Facebook, Google, Microsoft, and Twitter) now participate in the hash-sharing consortium.

There are many potential upsides to sharing tools, techniques, and information about threats across different sites. Content moderation is still a relatively new field, and it requires content hosts to consider an enormous range of issues, from the unimaginably atrocious to the benignly absurd.
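Mechanically, hash-matching of the PhotoDNA variety is easy to sketch, which helps explain how quickly it spread. The toy below substitutes an exact SHA-256 digest for the perceptual hashes real systems use (perceptual hashes are designed to survive resizing and re-encoding, which cryptographic hashes are not), and the names and sample entry are invented for illustration:

```python
import hashlib

# Digests of previously identified files, shared across participating sites.
# (The example entry is the SHA-256 of the bytes b"test".)
shared_hash_list = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def should_block(upload: bytes) -> bool:
    """Flag an upload if its digest matches the shared list."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_list

print(should_block(b"test"))         # True: matches a listed digest
print(should_block(b"new content"))  # False: unknown files pass through
```

Note the governance question hiding in that one shared set: whoever curates the list effectively decides what every participating site blocks, which is exactly the concern raised below.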
Smaller sites face resource constraints in the number of staff they can devote to moderation, and thus in the range of language fluency, subject matter expertise, and cultural backgrounds they can apply to the task. They may not have access to -- or the resources to develop -- technology that can facilitate moderation.

When people who work in moderation share their best practices, and especially their failures, it can help small moderation teams avoid pitfalls and prevent abuse on their sites. And cross-site information-sharing is likely essential to combating cross-site abuse. As scholar evelyn douek discusses (with a strong note of caution) in her Content Cartels paper, there's currently a focus among major services on sharing information about "coordinated inauthentic behavior" and election interference.

There are also potential downsides to sites coordinating their approaches to content moderation. If sites share their practices for defining prohibited content, they risk creating a de facto standard of acceptable speech across the Internet. This undermines site operators' ability to set the specific content standards that best enable their communities to thrive -- one of the key ways the Internet can support people's freedom of expression. And company-to-company technology transfer can give smaller players a leg up, but if that technology comes with a specific definition of "acceptable speech" baked in, it can end up homogenizing the speech available online.

Cross-site knowledge-sharing could also suppress the diversity of approaches to content moderation, especially if knowledge-sharing is viewed as a one-way street from giant companies to small ones. Smaller services can and do experiment with ways of grappling with UGC that don't rely on a centralized content moderation team, such as Reddit's moderation powers for subreddits, Wikipedia's extensive community-run moderation system, or Periscope's use of "juries" of users to help moderate comments on live video streams. And differences in a site's business model and core functionality can significantly affect the kind of moderation that actually works for it.

There's also the risk that policymakers will take nascent "industry best practices" and convert them into new legal mandates. That risk is especially high in the current legislative environment, as policymakers on both sides of the Atlantic actively debate all sorts of revisions and additions to intermediary liability frameworks.

Early versions of the EU's Terrorist Content Regulation, for example, would have required intermediaries to adopt "proactive measures" to detect and remove terrorist propaganda, and pointed to the GIFCT's hash database as an example of what that could look like. (CDT recently joined a coalition of 16 human rights organizations in highlighting a number of concerns about the structure of GIFCT and the opacity of its hash database.) And the EARN IT Act in the US is aimed at effectively requiring intermediaries to use tools like PhotoDNA -- and not to implement end-to-end encryption.

Potential policymaker overreach is not a reason for content moderators to stop talking to and learning from each other. But it does mean that knowledge-sharing initiatives, especially formalized ones like the GIFCT, need to be attuned to the risks of cross-site censorship and of eliminating diversity among online fora.
These initiatives should proceed with a clear articulation of what they can accomplish (useful exchange of problem-solving strategies, issue-spotting, and instructive failures) and what they can't (creating a single standard for prohibited -- much less illegal -- speech that can be operationalized across the entire Internet).

Crucially, this information exchange needs to be a two-way street. The resource constraints faced by smaller platforms can also lead to innovative ways of tackling abuse, and to specific techniques that work well for specific communities and use cases. Different approaches should be explored and examined on their merits, not viewed with suspicion as deviations from the "standard" way of moderating. Any recommendations and best practices should be flexible enough to be incorporated into different services' unique approaches to content moderation, rather than act as a forcing function standardizing everything towards one top-down, centralized model. As much as there is to be gained from sharing knowledge, insights, and technology across different services, there's no one-size-fits-all approach to content moderation.

Emma Llansó is the Director of CDT's Free Expression Project, which works to promote law and policy that support Internet users' free expression rights in the United States and around the world. Emma also serves on the Board of the Global Network Initiative, a multistakeholder organization that works to advance individuals' privacy and free expression rights in the ICT sector around the world. She is also a member of the multistakeholder Freedom Online Coalition Advisory Network, which provides advice to FOC member governments aimed at advancing human rights online.
Judge Recommends Copyright Troll Richard Liebowitz Be Removed From Roll Of The Court For Misconduct In Default Judgment Case
Would you believe it? Copyright troll Richard Liebowitz is in trouble yet again. And yes, we just ran a different article about him yesterday, but it's tough to keep up with all of young Liebowitz's court troubles. The latest is that a judge has sanctioned Liebowitz and recommended he be removed from the roll of the court in the Northern District of New York.

But here's the amazing thing: this all happened in a case where Liebowitz was seeking damages via a default judgment. As we noted just last week, it's quite rare for a court to do anything other than rubber stamp a default judgment request (what usually happens when the defendant ignores a lawsuit and doesn't show up in court). Yet, last week we saw a judge deny a default judgment in a different copyright trolling case, involving Malibu Media. And here, Richard Liebowitz has managed not only to lose a case in which the court clerk had already entered a default, but to get sanctioned and possibly kicked off the rolls of the court. That's... astounding.

The judge, Lawrence Kahn, is clearly having none of Liebowitz's usual bullshit. The ruling cites many of Liebowitz's other bad cases. Ostensibly, the issue at this point was that Liebowitz took the default and wanted the court to order statutory damages against the defendant (Buckingham Brothers LLC); instead, the court slammed Liebowitz over a wide variety of issues. First, the court points out that, despite the default, the original pleading was insufficient to support statutory damages (and attorney's fees), in part because, in typical Liebowitz fashion, he tried to hide stuff from the court. In particular, Liebowitz didn't allege the date of infringement or the date of the copyright registration. This matters because you can't get statutory damages if the infringement began before the registration -- an issue Liebowitz has been known to fudge in the past. And here, the failure to plead those key points dooms the request for statutory damages and attorney's fees:
Daily Deal: Naztech Ultimate Power Station
Featuring a sophisticated wireless charger, a 5-port USB charging hub, and an ultra-compact portable battery, the Naztech Ultimate Power Station is your all-in-one charging solution. Charge up to six power-hungry devices at the same time from a single AC wall outlet. With 50 watts of rapid charging power, the Ultimate is a practical solution for homes and offices with limited outlets and multiple devices that need high-speed charging. It's on sale for $50.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
The Supreme Court's Failure To Protect The Right To Assemble Has Led Directly To Violence Against Protesters
It appears the Supreme Court is unwilling to address another problem it created.

The first major problem created by the Court has been discussed here quite frequently. Qualified immunity was created by the Supreme Court in 1967 as a way to excuse rash decisions by law enforcement if undertaken in "good faith." Since then, it has only gotten worse. Fifteen years later, the Supreme Court added another factor: a violation of rights must be "clearly established" as a violation before a public servant can be held accountable for it. Further decisions moved courts away from determining whether a rights violation took place at all, relying instead on an ever-shrinking body of precedent showing the violation was "clearly established."

The Supreme Court continues to dodge qualified immunity cases that might make it rethink the leeway it has granted to abusive cops. Plenty of people have taken note of this, including federal court judges.

But that's not the only way the general public is being screwed by SCOTUS. As Kia Rahnama points out for Politico, the right to freely assemble -- long considered an integral part of the First Amendment -- continues to be narrowed by the nation's top court. As violence against demonstrators increases in response to ongoing protests over abusive policing (enabled by qualified immunity's mission creep), those participating in the violence can feel pretty secure that they'll never have to answer for the rights violations.
Bizarre Court Ruling Helps Cable Broadband Monopoly Charter Tap Dance Around Merger Conditions
Eager to impose higher rates on its mostly captive customers, Charter Communications (Spectrum) has been lobbying the FCC to kill merger conditions affixed to its 2015 merger with Time Warner Cable. The conditions, among other things, prohibited Charter from imposing nonsensical broadband caps and overage fees, or engaging in the kind of interconnection shenanigans you might recall caused Verizon customers' Netflix streams to slow to a crawl back in 2015. The conditions also included some fairly modest broadband expansion requirements that Charter initially tried to lie its way out of.

But with the GOP having neutered FCC authority over broadband providers (including the axing of net neutrality rules), Charter is obviously eager to take full advantage. So, on one hand, the company has engaged in some fairly dodgy lobbying of the FCC to scuttle the conditions, which already had a seven-year sunset provision (they expire in two years anyway). On the other hand, the telecom-backed Competitive Enterprise Institute (CEI) took a different tack and filed suit against the conditions, somehow convincing four Charter customers to sue under the argument that the conditions (not the merger) raised consumer prices.

This being America, the telecom-backed think tank scored a favorable ruling last week from the US Court of Appeals for the District of Columbia Circuit. In its ruling (pdf), the court completely bought into the CEI's argument that conditions crafted by consumer advocates, aimed at protecting consumers, somehow hurt consumers. As such, the court vacated two of the conditions -- one requiring Charter to offer lower-cost broadband plans, and one prohibiting dodgy behavior out at the edge of the network (interconnection).

In its ruling, the court proclaims that the restrictions on interconnection drove up consumer prices: