by Tim Cushing on (#5S8QW)
Facial recognition systems are becoming an expected feature in airports. Often installed under the assumption that collecting the biometric data of millions of non-terrorist travelers will prevent more terrorism, the systems are just becoming another bullet point on the list of travel inconveniences. With systems rolled out by government agencies with minimal or no public input and deployed well ahead of privacy impact assessments, airports around the world are letting people know they can fly anywhere as long as they give up a bit of their freedom. What's not expected is that the millions of images gathered by hundreds of cameras will just be handed over to private tech companies by the government that collected them. That's what happened in South Korea, where facial images (mostly of foreign nationals) were bundled up and given to private parties without ever informing travelers this had happened (or, indeed, would be happening).
Techdirt
Link: https://www.techdirt.com/
Feed: https://www.techdirt.com/techdirt_rss.xml
Updated: 2025-10-04 22:02
by Daily Deal on (#5S8QX)
Speakly is the fastest way to learn a language. The app combines science and computational algorithms to teach you the 4,000 most statistically relevant words of your target language in order of their importance. Learn different languages such as Estonian, English, Russian, Spanish, and more. Speakly allows you to practice real-life situations right from your smartphone or computer, building your confidence when speaking a foreign language in your everyday life. It's on sale for $70. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5S8N5)
I will admit that, until this morning, I had never heard of Ridley Scott's movie The Last Duel. It was released this fall in theaters only, which is a bold move while we're still dealing with a raging pandemic in which most people still don't want to go sit in a movie theater. And so, the box office results for the movie were somewhat weak. Indeed, it's now Scott's worst-performing movie at the box office. The issue, as many pointed out, was that The Last Duel was targeted at older movie-goers. A historical period piece film about a duel in France? Not exactly a hit among the youth market, and older folks are still the most concerned about COVID (which makes sense, considering it's a lot more deadly the older you get). A few weeks ago, Scott admitted he was disappointed in the movie's performance at the box office, but compared it to Blade Runner, which also didn't immediately set the world on fire when it was released, and is now a classic. But, now, having thought about it some more, Scott has decided that it must be Facebook and the kids these days who are at fault for not wanting to see his two-and-a-half-hour period piece epic. Going on Marc Maron's WTF podcast, Scott insisted that he had no problems with the way the film was marketed, but ripped into "millennials" (who, um, aren't as young as he seems to think they are) and... Facebook. Because if we've learned anything these days, it's that no matter what goes wrong with your life and plans, you can always blame Facebook for those failures:
by Karl Bode on (#5S8BY)
A few weeks back, both Verizon and AT&T announced they'd be pausing some aspects of their 5G deployments over FAA concerns that those deployments would create significant safety hazards. The problem: there's absolutely no evidence that those safety concerns are legitimate. The FAA and airline industry claim that use of the 3.7 to 3.98 GHz "C-Band" spectrum to deploy 5G wireless creates interference for avionics equipment (specifically radio altimeters). But the FCC has closely examined the claims and found no evidence of actual harm anywhere in the world, where more than 40 countries have deployed C-band spectrum for 5G use. Just to be sure, the FCC set aside a 220 MHz guard band that will remain unused as a sort of buffer to prevent this theoretical interference (double the amount Boeing requested). None of this was enough for the FAA. That's a major annoyance to AT&T and Verizon, which paid $45.45 billion and $23.41 billion respectively earlier this year for C-band spectrum, and have been widely and justifiably criticized for underwhelming 5G network performance and availability so far. Consumer advocates and policy experts like Harold Feld are also confused as to why the FAA continues to block deployment in these bands despite no evidence of actual harm:
by Tim Cushing on (#5S84M)
It's not enough that law enforcement can seize property simply by claiming it must be linked to some criminal endeavor, even if the cops can't be bothered to actually find any direct evidence of said criminal endeavor… or even bring charges against forfeiture victims. It's not enough that almost anything can be seized when accompanied by criminal charges, which can lead to officers strip-mining someone's residence while serving warrants. When alleged criminals are difficult to find, sometimes the cops just take stuff from crime victims. Multiple people are suing the Baltimore Police Department for grabbing all sorts of property from shooting victims while they were hospitalized and recovering from their injuries. These seizures would seem to be illegal, but the Baltimore PD pretends it's all about rounding up evidence -- even when said "evidence" has nothing to do with the crime being investigated.
by Tim Cushing on (#5S7TC)
The NYPD's war on its oversight continues. The secretive law enforcement agency has spent years fighting accountability and transparency, making up its own rules and engaging in openly hostile actions against public records requesters, city officials, internal oversight, and the somewhat-independent CCRB (Civilian Complaint Review Board). Journalists say the NYPD is worse than the CIA and FBI when it comes to records requests. The FBI and CIA say it's worse than a rogue state when it comes to respecting rights. The NYPD probably doesn't wonder why it houses bad cops. In fact, it probably doesn't even consider the worst of its ranks to actually be "bad" cops. Making things worse on the accountability side, the NYPD answers to two very powerful law enforcement unions, which makes it all but impossible for the department to punish bad cops, even if it wanted to. And while it's subject to public oversight via the CCRB, it has the power to override the board's decisions to ensure cops engaged in misconduct aren't punished too harshly for violating rights and destroying the public's trust. NYPD officers began wearing body cameras in 2017 as part of comprehensive reforms put in place by consent decrees issued by federal courts presiding over civil rights lawsuits over the NYPD's surveillance of Muslims and its minority-targeting "stop and frisk" program. But body cameras continue to be mostly useful to prosecutors and of negligible value to the general public that was supposed to benefit from this new accountability tool. As ProPublica reports, even the civilian oversight board can't get the NYPD to hand over footage crucial to investigations of misconduct.
by Timothy Geigner on (#5S7N1)
id Software is not a complete stranger to silly IP enforcement actions. Between trying to own concepts that are un-ownable and occasionally trying to throw its legal muscle around to bully others into not using common words in their own video game titles, the company has proven that it is perfectly capable of playing the IP bully. But at least in those specific instances, if you squint at them, they kinda sorta seem like industry-related, almost understandable IP disputes. When it comes to how id Software enforces its venerated Doom trademarks, however, that is not the case. The company has a history of opposing and/or sending C&Ds to all kinds of barely related or unrelated commercial entities for trying to register anything that has to do with the word "doom": podcasts, festivals, and entertainment properties. And now, it appears, thrash metal bands too. Dustin Mitchell, like many of us in recent years, came across the term "doomscrolling" and decided that "Doomscroll" would be a cool name for his next metal band. After having the idea, he decided to apply for a trademark on the name for musical acts. And then came the opposition from id Software.
by Tim Cushing on (#5S7HA)
Earlier this year, a man, wrongfully arrested and imprisoned for murder, was finally able to prove his innocence by producing rental car receipts showing he could not have possibly committed the crime. When the murder occurred, Herbert Alford was twenty minutes away from the scene of the crime, renting a car from Hertz. He requested this exculpatory evidence from Hertz, but it took the company three years to locate the receipts that proved his innocence. All told, Alford spent five years in jail on the bogus charge. Hertz apologized for spending more than 1,000 days "searching" for the rental records, but that apology doesn't put years back on Alford's life, a half-decade of which he lost to the penal system for a crime he didn't commit. This isn't Hertz's only problem. The company is apparently pretty lax when it comes to record-keeping, which has resulted in people being accosted -- often at gunpoint -- by police officers who've been (wrongly) informed the rental car they've pulled over is stolen. The proliferation of automatic license plate readers means these stops will only become more frequent as time goes on, as hotlists hit cameras capable of collecting millions of plate/location records a year. And that proliferation comes with a cost: the potential killing of innocent people because law enforcement has been misinformed about the status of a rented car. Hertz is apparently still trying to get people killed. As it emerges from bankruptcy, it is facing lawsuits over its potentially deadly mistakes. The company is still sending out hotlists to law enforcement, misinforming armed officers that legally-rented vehicles are actually stolen. This appears to be a long-running problem for the rental agency.
by Glyn Moody on (#5S7DX)
Hikvision describes itself as "an IoT solution provider with video as its core competency". It hasn't cropped up much here on Techdirt: it was mentioned earlier this year as one of two surveillance camera manufacturers that had been blacklisted by the US government because they were accused of being "implicated in human rights violations and abuses" in Xinjiang. Although little-known in the West, Hikvision is big: it has "more than 42,000 employees, over 20,000 of which are R&D engineers." Given the many engineers Hikvision employs, the following comment by Fred Streefland, Director of Cybersecurity and Privacy at Hikvision EMEA (Europe, the Middle East and Africa), reported by IPVM, is rather remarkable:
by Tim Cushing on (#5S7BJ)
Another day, another revelation about the abuse of NSO malware by its customers. The latest report shows NSO Group's powerful Pegasus malware was used to target Palestinian human rights activists. Citizen Lab is again on the case, providing the forensic examination of the detected malware and coming to this conclusion:
by Daily Deal on (#5S7BK)
The All-in-One Microsoft, Cybersecurity, And Python Exam Prep Training Bundle has 6 courses to help you gain the skills needed to become a tech professional. The courses contain hands-on lessons and exam prep for Python MTA, ITIL, CompTIA Cybersecurity, and GDPR certification exams. The bundle is on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5S78V)
Former President Donald Trump really has perfected turning every little thing he doesn't like into a grievance that he thinks he can sue over. It's funny because the Republican Party used to insist that "the left" was the party of victimhood, and yet in Trumpist world, they're always victims all the time, and always have to whine about how victimized they are. The latest is that Trump is literally threatening to sue the Pulitzer Prize Committee if they refuse to retract the 2018 prize that was given to the NY Times and the Washington Post for reporting on Russia's attempted interference with the 2016 Presidential campaign. In a letter sent to the Pulitzer Committee, Trump lawyer Alina Habba has some, well, bizarre theories about basically everything.
by Karl Bode on (#5S70S)
As companies and governments increasingly hoover up our personal data, a common refrain is that nothing can go wrong because the data itself is "anonymized" -- or stripped of personal identifiers like social security numbers. But time and time again, studies have shown how this really is cold comfort, given it takes only a little effort to pretty quickly identify a person based on access to other data sets. Yet most companies, many privacy policy folk, and even government officials still like to act as if "anonymizing" your data actually means something. That's a particular problem when it comes to user location data, which has been repeatedly abused by everybody from stalkers to law enforcement. The data, which is collected by wireless companies, app makers and others, is routinely bought and sold up and down a major chain of different companies and data brokers providing layers of deniability. Often with very little disclosure to or control by the user (though companies certainly like to pretend they're being transparent and providing user control of what data is traded and sold). For example, last year a company named Veraset handed over billions of location data records to the DC government as part of a COVID tracking effort, something revealed courtesy of a FOIA request by the EFF. While there's no evidence the data was abused in this instance, EFF technologist Bennett Cyphers told the Washington Post Veraset is one of countless companies allowed to operate so non-transparently. Nobody even knows where the datasets they're selling and trading are coming from:
by Leigh Beadon on (#5S675)
This week, Stephen T. Stone takes both top spots on the insightful side. In first place, it's a comment about the FBI raid on Project Veritas:
by Leigh Beadon on (#5S5BE)
Five Years Ago
This week in 2016, we were dealing with the fallout of Trump's election. It was apparent that the First Amendment was under attack given things like Trump's constant whining about the New York Times, we spotted some big copyright problems with Trump's transition website, the incoming administration was preparing to gut the FCC's reforms, and the TPP was dead (for the wrong reasons) but we feared what would come next. At the same time, the role of fake news and fact checking became a prominent subject as people tried to figure out what happened.
Ten Years Ago
This week in 2011, it's no surprise that the single biggest subject was SOPA. The House Judiciary Committee was holding hearings that were stacked five-to-one in favor of censoring the internet (though they insistently denied this was the case) — and they were predictably a lovefest for the bill. We featured pieces on how SOPA would be bad for filmmakers, online music services, VPNs and other important security and privacy tools, video games, investment in innovation, the health of Americans, and even the websites of Canadians. Opposition to the bill started lining up: major internet companies, lawyers and law professors, hackers, the ACLU, consumer rights groups, human rights groups, and all kinds of other people — not to mention general public opinion. But the fight was far from over...
Fifteen Years Ago
This week in 2006, we took a look at the intense hatred of the RIAA for the Consumer Electronics Association which was aptly opposing their propaganda, one defendant in a RIAA lawsuit was hoping to be covered by the settlement with Kazaa, and Larry Lessig was challenging the constitutionality of opt-out copyright. The MPAA was suing a firm for loading legally-owned DVDs onto iPods, while Universal Music was going after MySpace. There was still a lot of bandwagon-jumping from companies when it came to social media features and video sharing platforms, while we were concerned about YouTube's trigger-happy lawyers going after third-party tools. Also, we saw an important Section 230 ruling in a lawsuit against Craigslist.
by Tim Cushing on (#5S4RS)
It's not often a citizen shoots at a cop and lives to tell about it. It's even rarer when they walk away from criminal charges. When it's considered "assault" to be anywhere in the general location of an angry cop, actual shots fired tend to be greeted with severe charges. Acquittals are unicorns in the court system, which largely tends to believe people who shoot at cops always have zero justification for their actions. One black Minnesota resident has bucked the odds. Jaleel Stallings was arrested after he fired his gun three times at Minneapolis police officers during protests following the murder of George Floyd by Officer Derek Chauvin. Without further facts, one would assume Stallings was part of the problem, a violent protester willing to kill or injure police officers. But the facts are in. And they are ugly. Stallings, represented by Eric Rice, was able to secure an acquittal thanks to some extremely damning body camera footage recorded by the officers he shot at. The footage was captured by officers in an unmarked van who were driving around "enforcing" curfew by shooting rubber bullets at random people on the street from the moving vehicle. That recording can be viewed here. Deena Winter has an amazing write-up of the recording's contents at Minnesota Reformer. It shows cops aggressively targeting people doing nothing more than violating a curfew order. The officers in the unmarked van trolled Lake Street, looking for victims to absorb their violent policing, apparently in retaliation for having to endure ongoing protests triggered by other violent actions by other violent cops. Here's just a taste of what the recording contains:
by Mike Masnick on (#5S4KR)
As you'll recall, a few months ago, former President Donald Trump sued Facebook, Twitter, and YouTube claiming that his own government violated the 1st Amendment... because those three private companies kicked him off their services for violating their policies. Yes, the premise of the lawsuit is that while he was president, the actions of three private companies somehow proved that the government (which he ran) was violating his rights. The lawsuits are nonsense and they have not gone well for Trump at all. Part of the (very) ridiculous argument is that Section 230 is unconstitutional. The lawsuit against Twitter was recently transferred from Florida (where Trump filed it) to the Northern District of California (where Twitter wanted it), and now the Justice Department has said it will be entering the case specifically to defend the constitutionality of Section 230.
by Timothy Geigner on (#5S4FW)
We have talked for a long, long time about how content moderation at the scale of the largest internet and social media platforms is essentially impossible. But it's not just content moderation that is proving difficult for those platforms. Policing those platforms for anything that relies on user-based input is difficult as well. For instance, Instagram recently found out that its process for locking up the accounts of the deceased may need some work, as one person was able to get Instagram head Adam Mosseri's account locked.
by Mike Masnick on (#5S4DW)
Miramax, the film studio originally founded by Harvey Weinstein before being sold to Disney, then spun out, and currently owned jointly by a Qatari media company, beIN, and ViacomCBS, is in the news for suing Quentin Tarantino over his collection of NFTs about Pulp Fiction -- one of Miramax's biggest hit films in the 90s, and the one that put Tarantino on the map. Like many other content creators, Tarantino is exploring the NFT space, and his experiment is actually somewhat interesting. It's using a modification of typical NFTs, where some (or all) of the content remains "access controlled" and only available to the purchaser. In other words: it's DRM'd NFTs, which seems to miss the entire point of NFTs, which is creating a new scarcity of ownership without the scarcity of content access. But, hey, it's Hollywood, so restricting access and using DRM is kind of in their DNA. That said, the NFTs are supposedly handwritten bits of the original Pulp Fiction screenplay. From the page:
by Nirit Weiss-Blatt on (#5S4BB)
The roll-out of the “Facebook Papers” on Monday, October 25, felt like drinking from a fire hose. Seventeen news organizations analyzed documents received from the Facebook whistleblower, Frances Haugen, and published numerous articles simultaneously. Most of the major news outlets have since then published their own analyses on a daily basis. With the flood of reports still coming in, “Accountable Tech” launched a helpful aggregator: facebookpapers.com. The volume and frequency of the revelations are well-planned. All the journalists were approached by a PR firm, Bryson Gillette, that, along with prominent Big Tech critics, is supporting Haugen behind the scenes. “The scale of the coordinated roll-out feels commensurate with the scale of the platform it is trying to hold accountable,” wrote Charlie Warzel (Galaxy Brain). Until the “Facebook Papers,” comparisons of Big Tech to Big Tobacco didn’t catch on. In July 2020, Mark Zuckerberg of Facebook, Sundar Pichai of Google, Jeff Bezos of Amazon, and Tim Cook of Apple were called to testify before the House Judiciary Subcommittee on Antitrust. A New York Times headline claimed the four companies were preparing for their “Big Tobacco Moment.” A year later, this label is repeatedly applied to one company out of those four, and it is, unsurprisingly, a social media company. TECHLASH 1.0 started off with headlines like Dear Silicon Valley: America’s fallen out of love with you (2017). From that point, it became a competition of “who slams them harder?”, eventually reaching: Silicon Valley’s tax-avoiding, job-killing, soul-sucking machine (2018). In the TECHLASH 2.0 era, the antagonism has reached new heights. The “poster child” for TECHLASH 2.0 - Facebook - became a deranging brain implant for our society or an authoritarian, hostile foreign power (2021). In this escalation, virtually no claim about the malevolence of Big Tech is too outlandish to generate considerable attention. As for the tech companies, their crisis response strategies have evolved as well. As TECHLASH 2.0 launched daily attacks on Facebook, its leadership decided to cease its apology tours. Nick Clegg, Facebook VP of Global Affairs, provided his regular “mitigate the bad and amplify the good” commentary in numerous interviews. Inside Facebook, he told the employees to “listen and learn from criticism when it is fair, and push back strongly when it is not.” Accordingly, the whole PR team transitioned into (what company insiders call) “wartime operation” and a full-blown battle over the narrative. Andy Stone combated journalists on Twitter. In one blog post, the WSJ articles were described as inaccurate and lacking context.
A lengthy memo called the accusations “misleading” and some of the scrutiny “unfair.” Zuckerberg’s Facebook post argued that the heart of the accusations (that Facebook prioritizes profit over safety) is “just not true.” Another spokesperson simply called this notion a lie. On Twitter, Facebook’s VP of Communications referred to the embargo on the consortium of news organizations as an “orchestrated ‘gotcha’ campaign.” During Facebook’s third-quarter earnings call, Mark Zuckerberg reiterated that “what we are seeing is a coordinated effort to selectively use leaked documents to create a false picture about our company.” Moreover, Facebook attacked the media for competing on publishing those false accusations: “This is beneath the Washington Post, which during the last five years competed ferociously with the New York Times over the number of corroborating sources its reporters could find for single anecdotes in deeply reported, intricate stories,” said a Facebook spokeswoman. “It sets a dangerous precedent to hang an entire story on a single source making a wide range of claims without any apparent corroboration.” Facebook’s overall crisis response strategies revealed the rise of VADER:
by Daily Deal on (#5S4BC)
NYTSTND QUAD lets you charge up to 4 devices all at once. You can wirelessly charge 3 devices at once with the 5-coil full surface area and the built-in Genuine Apple Watch Magnetic charger. An integrated USB-C connector is available for charging another compatible device. This charging dock packs in protection protocols to keep your device and the charger itself safe. NYTSTND QUAD combines aesthetics and performance in a compact, elegant design. Made of premium Amish-sourced wood and high-quality real leather, NYTSTND QUAD is designed to blend into any environment. It comes in four different colors and is on sale for $213. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5S45V)
So, just yesterday we wrote about how the FBI's raid of Project Veritas's founder and a few associates was concerning from a press freedom standpoint -- and that you should be concerned even if you believe that Project Veritas are a bunch of dishonest grifters. However, beyond being a bunch of dishonest grifters -- who still deserve press freedoms -- it appears that Project Veritas are also a giant bunch of hypocrites. All week they've been grandstanding about press freedoms... while at the same time they hired the law firm of Clare Locke -- a firm that brags about silencing the press -- to try to silence the NY Times. Incredibly, so far it has worked. Project Veritas and Clare Locke successfully got a judge in NY to issue a ridiculously broad order requiring that the NY Times delete information it had in its possession and then stop reporting on certain aspects of Project Veritas' behavior. This is straight up prior restraint.
by Karl Bode on (#5S3X4)
Making it annoying to cancel a subscription has long been a proud American pastime. AOL was notorious for making it extremely difficult. Broadband and cable providers routinely make it a pain in the ass to cancel phone, broadband, or cable. And everyone from the Wall Street Journal to SiriusXM enjoys making signing up easy, but cancelling a chore that requires a phone call and time on hold. In COVID times, with support staffs often short-handed, it's a practice that's become more annoying than ever. In an about-face from decades of regulatory apathy, Lina Khan's FTC has announced that the agency is going to start cracking down on companies that trick users into signing up for services, or make it an annoying headache to cancel:
by Timothy Geigner on (#5S3AZ)
We've been talking a great deal about Take-Two Interactive and Rockstar Games lately as it relates to their aggressive actions on modding communities for the Grand Theft Auto series. This new war on modders really kicked off over the summer, with the companies looking to shut down a bunch of mods that mostly brought old GTA content into newer games for retro fans. Then came one modding group managing to reverse-engineer the game to create its own version of the source code, which it posted on GitHub. Rockstar DMCA'd that project, but at least one modder managed to get GitHub to put it back up. That project was called "GTA RE3" and was supposed to be the basis to let other modders do all sorts of interesting things with the game from a modding standpoint, or to forklift the game onto platforms it wasn't designed for, say on a Nintendo console. Take-Two and Rockstar then cried "Piracy!" and filed a lawsuit. That's typically where the story would end. The modding group would hide or run away if they could, or they would settle the suit for fear of a long and protracted legal process. But that doesn't appear to be the case here, as the four men behind RE3 have responded to the suit, denying all accusations and asserting fair use. The response from the modders' attorneys is embedded below, but mostly consists of outright one-sentence denials, or assertions that the defendants lack sufficient knowledge to affirm or deny the claims and therefore deny them. But there are also some nuggets in there that tease out what the defense would be if this thing proceeds.
by Tim Cushing on (#5S360)
Law enforcement doesn't just engage in pretextual stops of cars. Bicyclists are on the radar as well, especially if they happen to be minorities. That's according to data obtained by the Los Angeles Times, which shows the LA Sheriff's Department (which has buried the needle on the far end of "problematic" for years) is targeting bike riders with tactics that fall somewhere between pretextual stop and stop-and-frisk.
by Mike Masnick on (#5S31T)
I am no fan of Project Veritas. They appear to be a group of malicious grifters, deliberately distorting things, presenting them out of context to fit (or make) a narrative. Even so (or perhaps, especially so), we should be extremely concerned about the FBI's recent raid on Project Veritas' founder James O'Keefe and two of his colleagues. The FBI and DOJ say they're investigating the apparent theft of a diary belonging to Joe Biden's daughter, Ashley, which later ended up in Project Veritas' hands. But, as we've discussed for many years, there are serious 1st Amendment questions involved when the government is raiding the homes of journalists and seizing their computers, phones, and other records. I'm assuming that some of you are going to say that this shouldn't matter because O'Keefe and Veritas aren't "real journalists," and we'll get to that argument later. But the simple fact is that after many years (and multiple administrations led by both parties) in which the DOJ felt free to collect journalist records, earlier this year, we were told that the DOJ was finally going to no longer sweep up journalist records (though even then it noted that didn't apply in cases where the journalists themselves were targets of a criminal investigation -- as was the case here). However, unless there's really strong evidence indicating that Project Veritas was involved in the actual theft of the diary, if the organization was merely the recipient of that diary, then these raids raise many, many concerns about violations of press freedoms and the use of law enforcement to intimidate the press. Many others seem to be similarly concerned, as this is raising a lot of alarm bells for those who work on press freedom issues:
by Tim Cushing on (#5S2ZV)
No news is the only good news for Israeli tech company NSO Group. The problem is it's impossible to generate no news when you can't go more than a few days without generating more bad news. Since the leak of data showing its customers were targeting journalists, activists, religious leaders, and government officials with powerful malware capable of intercepting cellphone communications, the headlines NSO has racked up range from bad to worse to nightmarish. Multiple countries are now following up on investigations performed by entities like Citizen Lab, conducting investigations of their own to determine whether they've been breached by NSO's malware or if government customers have violated rights. The United States has effectively blacklisted the company, forbidding US government agencies from buying its products and US exploit developers from selling to NSO. One country was host to a large percentage of the numbers on the leaked list of potential NSO Group malware targets: Mexico. 15,000 of the 50,000 phone numbers on the list were located in that country. Perhaps unsurprisingly, Mexico is home to the first arrest related to abuse of NSO spyware.
by Glyn Moody on (#5S2TS)
Although the COVID-19 pandemic has wreaked terrible suffering across the world, we are fortunate that we already have several vaccines that have been shown to be highly effective in reducing the number of deaths and hospitalization rates. Discovering vaccines proved easier than expected, but ensuring that everyone – including people in developing countries – has access to them has proved much harder. The main reason for that is an intellectual monopoly: patents. Even though at least two of the main vaccines were developed almost entirely using public funds, which ought by rights to mean that the results are in the public domain, companies have obtained exclusionary patents on them. This has led to calls for a patent waiver of some kind to allow countries to produce their own supplies of medicines, without needing to pay licensing fees. The proposal from India and South Africa to the World Trade Organization (WTO) does not mention patents at all, but lists instead what the waiver seeks to achieve: "the prevention, containment and treatment of COVID-19". A paper from Sean Flynn, Erica Nkrumah and Luca Schirru points out that works covered by copyright also need a waiver if we are to combat COVID-19 effectively. For example:
by Daily Deal on (#5S2TT)
The 2021 Complete Video Production Super Bundle has 10 courses to help you learn all about video production. Aspiring filmmakers, YouTubers, bloggers, and business owners alike can find something to love about this in-depth video production bundle. Video content is fast changing from the marketing tool of the future to that of the present, and here you'll learn how to make professional videos on any budget. From the absolute basics to screenwriting to the advanced shooting and lighting techniques of the pros, you'll be ready to start making high-quality video content. You'll learn how to make amazing videos, whether you use a smartphone, webcam, DSLR, mirrorless, or professional camera. It's on sale for $35. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5S2QY)
Dealing with disinformation is not an easy problem to solve. Part of the problem is that very few people even agree on how to define disinformation, or how subjective it is. Indeed, as we've noted, most of the reporting on disinformation itself is misinformation (or, at the very least, misleading). That said, I still had decently high hopes for the Aspen Institute's "Commission on Information Disorder." The Aspen Institute tends to do more credible and serious work on tech policy issues than many other groups. And the project was supported by Craig Newmark, who has been funding a bunch of important research over the past few years. And, while some of the choices for who was on the Commission struck me as odd (Prince Harry?!? Katie Couric?!?), there were some very serious and very thoughtful participants on the Commission itself, acting as "advisors" to the group, and who participated in the various discussions they held. But, perhaps the wide range of perspectives of people involved was more of a hindrance than a help. The final report that was just released is a kind of punt -- in which the Commission effectively tries to "split the baby" -- by offering a kind of middle-of-the-road perspective, without realizing that splitting the baby was never the point of the original story. That is, the report more or less acknowledges that the real problems are much more fundamental than disinformation (indeed, it quotes me on this very point -- even giving me a pull quote treatment -- though at no point did anyone involved in this project reach out to me or ask me for any input), but then still puts in place recommendations that don't seem to acknowledge this reality. So upfront it admits that there is a fundamental issue in that a society with arbiters of truth is not a free society:
by Karl Bode on (#5S2EP)
For years, we've noted how telecom and media giants have been trying to force "big tech" to give them huge sums of money for no reason. The shaky logic usually involves claiming that "big tech" gets a "free ride" on telecom networks, something that's never actually been true. This narrative has been bouncing around telecom policy circles for years, and recently bubbled up once again thanks to FCC Commissioner Brendan Carr. Carr's push basically involves parroting AT&T's claim that big tech should be funding AT&T network upgrades. You're to ignore the fact that giants like AT&T routinely take billions in tax breaks and subsidies for network upgrades that never arrive. This quest to punish "big tech" with unnecessary new surcharges is something that's also supported by the National Association of Broadcasters, which has long hated efforts by companies like Microsoft to use unlicensed spectrum from unused television channels (aka "white spaces") to deliver new broadband options. The FCC does desperately need to find more funding revenue to shore up programs like the Universal Service Fund (USF) and E-Rate, which help provide broadband access to schools and low income Americans. So it recently announced it would be considering a new tax on unlicensed spectrum. Pressured by NAB, the Biden FCC's plan would assess regulatory fees on “unlicensed spectrum users,” which would include users of Wi-Fi, Bluetooth and other consumer wireless devices. It's a tax on tech, proposed by telecom and media companies that want to punish their ad and data collection competitors in tech. Harold Feld, who probably knows more about wireless spectrum policy than anybody, has penned a helpful piece over at Forbes explaining why this is a terrible idea. He outlines that NAB's real goal is to punish companies like Microsoft for daring to use spectrum the broadcast industry falsely believes belongs to them:
by Timothy Geigner on (#5S1T9)
Usually when a company does something that results in a public backlash, that company will stop digging holes. Over the summer, we wrote about Rockstar Games and its parent company, Take-Two Interactive, starting a war on modding communities for the Grand Theft Auto series. After years of largely leaving the modding community alone, these companies suddenly started targeting mods that were chiefly designed to put content or locations for older GTA games into GTA5. While the public was left to speculate as to why Take-Two and Rockstar were doing this, the theory that perhaps it meant they were planning to release remastered versions of older games eventually turned out to be true when GTA Trilogy was announced. In other words, these companies were happy to reap all the benefits of an active modding community right up to the point where they thought they could make more money through a re-release, at which point the war began. And, as we also covered recently, the PC release for GTA Trilogy went roughly as horribly as you can imagine. While the game was released and purchased by many, mere days afterwards Take-Two not only delisted those games from marketplaces, but also experienced "unscheduled maintenance" on Rockstar's game launcher, meaning owners of that game and several other Rockstar games couldn't play the games they'd bought. That eventually got corrected several days later, but it was a terrible look, especially when combined with how little information Rockstar provided the public as it was going on. Many paying customers were very, very angry. So, did Take-Two and Rockstar reverse course? Nope! Instead, it seems that the war on the modding community is only accelerating.
by Copia Institute on (#5S1N8)
Summary: Dealing with content moderation involving user-generated content from humans is already quite tricky -- but those challenges can reach a different level when artificial intelligence is generating content as well. While the cautionary tale of Microsoft's AI chatbot Tay may be well known, other developers are still grappling with the challenges of moderating AI-generated content. AI Dungeon wasn't the first online text game to leverage the power of artificial intelligence. For nearly as long as gaming has been around, attempts have been made to pair players with algorithmically-generated content to create unique experiences. AI Dungeon has proven incredibly popular with players, thanks to its use of powerful machine learning algorithms created by OpenAI, the latest version of which substantially expands the input data and is capable of generating text that, in many cases, is indistinguishable from content created by humans. For its first few months of existence, AI Dungeon used an older version of OpenAI's machine learning algorithm. It wasn't until OpenAI granted access to the most powerful version of this software (Generative Pre-Trained Transformer 3 [GPT-3]) that content problems began to develop. As Tom Simonite reported for Wired, OpenAI's moderation of AI Dungeon input and interaction uncovered some disturbing content being crafted by players as well as its own AI.
by Leigh Beadon on (#5S1H7)
We've got a crossposted episode for you this week: Mike recently joined The Cato Daily Podcast with Caleb O. Brown for a discussion about the "hacking" fiasco in Missouri and the state's treatment of the journalists who exposed its huge data security flub. It's a shorter conversation than our usual podcasts, and you can listen to the whole thing on this week's episode.Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
by Karl Bode on (#5S1CZ)
We had just gotten done noting that it didn't seem like Apple had learned a whole lot from the last few years of "right to repair" backlash, making it harder to replace iPhone 13 screens. But not only did the company (partially) backtrack from that decision, it has made another shocking pivot: it's actually making phone parts and documentation more accessible to Apple customers. The move, announced in a company press release, should make it significantly easier for Apple customers to repair their devices at home:
by Cathy Gellis on (#5S1A8)
We've talked a lot about the Florida law SB 7072 that attempts to regulate social media platforms. In broad strokes, it tries to constrain how at least certain Internet platforms moderate content by imposing specific requirements on them about how they must or may not do so. That law is now being challenged in court. The district court enjoined it, and Florida has now appealed to the Eleventh Circuit to have the injunction overturned. This week the Copia Institute joined others in filing amicus briefs in support of maintaining the injunction. As we told the court, the Copia Institute wears two hats: One hat we wear is as commentators on the issues raised by the intersection of technology and civil liberties, which laws like Florida's impact. Meanwhile, the other hat is the one we wear by sitting at this crossroads ourselves, particularly with respect to free speech. To operate Techdirt, the Copia Institute needs robust First Amendment protection, and also Section 230 protection, to both convey our own expression and to engage with our readers, including in our comments section. Unfortunately the Florida law impermissibly targets both sets of rights. And this constitutional and statutory incursion affects every Internet platform, and all the user speech they facilitate, including us and ours, even if we don't all fall directly into its crosshairs. The Florida law's enforcement crosshairs are especially arbitrary, ostensibly targeting companies with very high revenue, or very large audiences, unless, of course, they happen to also own a theme park… But one thing we told the court is that the specific details don't really bear on the law's overall constitutional and statutory defects. Part of the reason is that if Florida could pick these arbitrary criteria, which might not apply to certain platforms, another state could pass a law with different criteria that would reach more platforms, and then these platforms would still be left having to cope with a fundamentally impermissible law. Also, it's not clear that even small entities like ours might not be able to attract the larger audiences the Florida law describes, since that's at the very heart of what we try to do as an enterprise: have reach and influence. The point of the First Amendment is to make it possible for outlets like ours to connect with readers – only thanks to laws like this, we could end up punished with onerous regulation we couldn't possibly comply with should we succeed. And that sort of punitive deterrence to expression is not something the First Amendment, or even Section 230, permits. But even if Techdirt could remain safe from the reach of a law like this, it would still hurt us if it hurt other platforms, because we need the help of other platforms to help our message get out too. Indeed, the whole point of the Florida law is ostensibly to help people use these other platforms to get their messages out. Only the upshot is that the law does the exact opposite by salting the regulatory earth so that no platform can safely exist to help users do that.
by Daily Deal on (#5S1A9)
AnyFix is your one-stop solution to fix various iOS/iPadOS/tvOS/iTunes issues in minutes, and bring your Apple devices back to normal without data loss. AnyFix innovatively offers 3 repair modes for you to choose from based on how severe your problem is. It makes every effort to ensure you can fix the system problem on your iPhone, iPad, iPod touch, or Apple TV, with the highest ever success rate. AnyFix gives a one-click solution to fix 200+ iTunes errors. Can’t download/install/update iTunes? iTunes won't recognize iPhone? See an error when you back up, restore, or sync the device? Now you can solve any of them or other problems by simply clicking a button. No tedious operations are needed and no data loss at all. It's on sale for $40. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
by Mike Masnick on (#5S16Q)
It still is amazing to me how many people in the more traditional media insist that social media is bad and dangerous and infecting people's brains with misinformation... but who don't seem to recognize that every single such claim made about Facebook applies equally to their own media houses. Take, for example, CNN. Last week it excitedly blasted out the results of a poll that showed three-fourths of adults believe Facebook is making society worse. Now, there is an argument that Facebook has made society worse, though I don't think it's a particularly strong one. For many, many people, Facebook has been a great way to connect and communicate with friends and family -- especially during a pandemic when many of us have been unable to see many friends and family in person. Either way, it's undeniable that the traditional media -- which, it needs to be noted, compete with social media for ad dollars -- has spent the last five years blasting out the story over and over again that pretty much everything bad in the world can be traced back to Facebook, despite little to no actual evidence to support this. So, then, if CNN, after reporting about how terrible and evil Facebook is for five years, turns around and polls people, of course most of them are going to parrot back what CNN and other media have been saying all this time. Hell, I'm kind of surprised that it's only 76% of people who claim Facebook has made society worse. I mean, just in the past couple of months, every CNN story I can find about Facebook seems to be completely exaggerated, with somewhat misleading claims blaming pretty much everything wrong in the world on Facebook. It's almost like CNN (and other media organizations) are in the business of hyping up stories to manipulate emotions -- the very thing that everyone accuses Facebook of doing. Except with CNN, there are actual human employees making those decisions about what you see. Which is not how Facebook works. Here are just a few recent CNN stories I found:
by Karl Bode on (#5S0ZM)
We've noted more than a few times how the AT&T Time Warner and DirecTV mergers were a monumental, historic disaster. AT&T spent $200 billion to acquire both companies thinking it would dominate the video and internet ad space. Instead, the company lost 9 million subscribers in nine years, fired 50,000 employees, closed numerous popular brands (DC's Vertigo imprint, Mad Magazine), and basically stumbled around incompetently for several years before recently spinning off the entire mess for a song. In a new book slated to be released next week, former Time Warner CEO Jeff Bewkes didn't hold back when talking about AT&T's absolute incompetence at running a media empire:
by Timothy Geigner on (#5S0BX)
You may recall that a couple of months ago we discussed Rockstar and Take-Two, the game studio and publisher behind the Grand Theft Auto series, taking down a fan-made GTA4 mod that aimed to put all of the cities from previous games in one massive map. While this was a labor of love by dedicated fans of the GTA series, it escaped nobody's attention that this action was taken on a mod started in 2014 just as Rockstar was about to release GTA Trilogy, consisting of remastered versions of GTA3, Vice City, and San Andreas. The very cities the mod looked to bring into GTA4. In other words, the fan project was only shut down when the game companies decided to try to make money off this retro love themselves. So how's it going? Well, not too fucking great on the PC side, considering that the PC version was pulled down basically everywhere.
by Tim Cushing on (#5S08V)
In the past week, the federal government has twice(!) been forced to return money it stole from travelers just because it could. In both cases, American citizens were trying to board domestic flights at US airports. And in both cases, despite it not being illegal to carry large amounts of cash on domestic flights, the government decided the cash had to have been illegally obtained, and moved forward with forfeiture proceedings. The first case involves 58-year-old Kermit Warren, a New Orleans native who was accosted by federal agents at the Columbus, Ohio airport. Warren had $30,000 on him which he had planned to use to buy a tow truck for his scrap metal business. Unfortunately, the sale fell through, forcing him to purchase a one-way flight back home. Warren's cash caught the eye of a TSA screener. Screeners are supposed to look for threats to transportation security (that's right in the agency name) and/or contraband. US currency is not contraband but it is definitely on the TSA's radar, thanks to the DEA's purported "anti-drug" efforts. The DEA actually pays screeners to search for cash. Screeners have responded by locating cash far more frequently than explosives or contraband. The TSA alerted the DEA. Agents showed up and questioned Warren. They didn't like his answers -- some of which were untruthful (he falsely claimed to be a retired cop). But it really wouldn't have mattered what Warren's answers were. The DEA wanted the cash and even sworn affidavits from multiple family members and business associates wouldn't have changed what happened next. The DEA brought in a drug dog as a seizure permission slip. The dog (completely unsurprisingly) "alerted" on the money, having "detected" the odor of drugs. Almost all cash in circulation has drug residue on it. That should not be considered reasonable suspicion, much less probable cause. Despite making vague allegations Warren was involved in the illicit drug trade, the DEA let him go. But it kept the money. Warren fought back, a move the government clearly didn't anticipate. It lost spectacularly.
by Tim Cushing on (#5S05P)
After taking some positive steps towards trimming the growth of qualified immunity it had itself encouraged for years, the Supreme Court decided to reverse course. Two more cases on the court's "shadow docket" were sent back to the appellate levels with instructions to reverse the stripping of qualified immunity from government employees accused of rights violations. Note that refusing to grant qualified immunity does not guarantee a win for the plaintiff. All the removal of this immunity does is allow the court to consider more facts and place unresolved questions in front of a jury… you know, the sort of thing courts are supposed to be doing fairly often. Qualified immunity invocations short-circuit the process, allowing courts to arrive at conclusions without further fact-finding by giving law enforcement officers the benefit of the doubt. More bad news from the nation's highest court has arrived. As Radley Balko puts it in his editorial for the Washington Post, the Supreme Court has chosen to abdicate its obligations to the Bill of Rights. (non-paywalled link here)
by Mike Masnick on (#5S035)
The EU is at it again. Recently Mozilla put out a position paper highlighting the latest dangerous move by busybody EU regulators who seem to think that they can magically regulate the internet without (1) understanding it, or (2) bothering to talk to people who do understand it. The issue is the Digital Identity Framework, which, in theory, is supposed to do some useful things regarding interoperability and digital identities. This could be really useful in enabling more end user control over identity and information (a key part of my whole Protocols, Not Platforms concept). But the devil is in the details, and the details are a mess. It would force browsers to support a specific kind of authentication certificate -- Qualified Web Authentication Certificates (QWACs) -- but as Mozilla points out, that would be disastrous for security:
by Tim Cushing on (#5RZZZ)
Long before its current run of Very Bad News, Israeli malware purveyor NSO Group was already controversial. Investigations had shown its exploits were being used to target journalists and activists, and its customer list included governments known mostly for their human rights abuses. Facebook and WhatsApp sued NSO in November 2019, alleging -- among other things -- that NSO had violated WhatsApp's terms of use by deploying malware via the chat service. The arguments made by Facebook/WhatsApp aren't the best and they could allow the CFAA to be abused even more by expanding the definition of "unauthorized access." Then there's the question of standing, which NSO raised in one motion to dismiss. The alleged harms were to users of the service, not to the service itself. While suing on behalf of violated users is a nice gesture, it's pretty difficult to talk a court into granting your requests for injunctions or damages if you're not the target of the alleged abuse. NSO also pointed out it didn't actually violate anyone's terms of service. Its customers did when they used WhatsApp to deliver malware to targets. NSO said WhatsApp was welcome to sue any of its customers, but was unlikely to get anywhere with that either, given the immunity from lawsuits generally handed to foreign governments. Then NSO made a ridiculous claim of its own: it said it was immune from lawsuits since it provided this malware to foreign governments. By extension, it argued, the same immunity protecting foreign sovereigns (i.e., its customers) should be extended to the private company that sold them phone exploits. That argument was rejected by the district court. And the Ninth Circuit Appeals Court has just affirmed [PDF] that rejection, which means NSO will have to continue to fight what is now one of several damaging fires. The Appeals Court says no reasonable reading of the Foreign Sovereign Immunities Act (FSIA) supports NSO's argument in favor of it taking no responsibility for its actions or the actions of its customers.
|
![]() |
by Daily Deal on (#5S000)
AOMEI Backupper Professional Edition is complete yet simple backup software for Windows PCs and laptops. It includes all the features of AOMEI Backupper and supports system/disk/file/partition backup and restore, file sync, and system cloning, as well as scheduled backups, image merging, dynamic volume backup, UEFI boot, and GPT disk backup. It's on sale for $29. Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
|
![]() |
by Mike Masnick on (#5RZX3)
It didn't get as much press as some of Facebook whistleblower Frances Haugen's other high-profile talks to government inquisitors, but last week, Haugen testified before the rather Orwellian International Grand Committee on Disinformation. This is a bizarre "committee" set up by regulators around the world, but its focus -- and its members -- are kind of notable. Considering that tons of evidence shows that cable news is a much larger vector of disinformation to the general public, it seems telling that the "International Grand Committee on Disinformation" only seems interested in online disinformation. I mean, it's right in the group's mission:
|
![]() |
by Karl Bode on (#5RZJY)
By now we've well established that regional monopolization, limited competition, and the (state and federal) corruption that enables both (aka regulatory capture) are why US broadband is spotty, expensive, and slow. With neither competent regulatory oversight nor meaningful competition to drive improvements, regional dominant broadband providers simply... don't bother. The end result goes beyond high prices to substandard customer service, privacy violations, net neutrality violations, and unnecessary surcharges, usage caps, and fees they often don't clearly disclose. A recent report from the Institute For Local Self Reliance took a look at how transparent U.S. ISPs are about broadband prices, line restrictions, and hidden surcharges. And the results are about what you'd expect. As in, most U.S. ISPs do a fairly terrible job (quite intentionally) of making it clear how much you'll pay for broadband, what your upstream and downstream speeds are, and whether there are any hidden restrictions or fees you'll face once you sign up. Why? For one, big ISPs don't like making it easy to do direct price comparisons, lest people clearly understand the real impact of limited competition and regional market failure. They also routinely engage in false advertising where they advertise one lower price, then hit you with a bunch of bullshit fees. Big ISPs also tend to hide anything that doesn't make them look good or could showcase their network underinvestment, such as pathetic upload speeds:

This is, the report notes, much less of a problem with local community broadband networks. Previous studies out of Harvard had noted community broadband generally offers lower, more transparent prices than large ISPs, and that's showcased again here:

Again, this isn't just a failure of competition, but a failure of regulatory oversight and telecom policy. The FCC's 2015 net neutrality rules included a provision requiring that ISPs be transparent about pricing and line restrictions. And while the net neutrality aspect of their 2015 order was repealed by the Trump FCC, the transparency component was not. Still, despite the transparency rule having now existed for six years under two different political parties, the report notes how nobody at the FCC has bothered to enforce it:
|
![]() |
by Ishpreet Singh on (#5RZ3F)
At the turn of the last millennium, there was a wave of optimism surrounding new technologies and the empowerment of the modern digital citizen. A decade later, protestors across North Africa and the Middle East leveraged platforms such as Facebook and Twitter to bring down authoritarian regimes during the revolutions of the Arab Spring, and it was believed these technologies would bring about a new flourishing of the worldwide liberal democratic order.

Unfortunately, the emancipatory potential of the open internet has been undermined by the latest in a long line of authoritarian regimes hijacking the technology. Autocracies adapted, with China leading the way. Towards the end of the 20th century, the CCP introduced a new political-economic model revolving around centralized rule and a controlled market economy. With this new model, China has broken common political-economic orthodoxy by limiting domestic desire for democracy while maintaining a sizable middle class. A key driver of this is a “comprehensive system of state repression, bolstered by the latest digital technologies.”

China has applied key technologies such as artificial intelligence, facial recognition, and data analytics to usher in a new age of digital dictatorship, in which the state can spy on its citizens, predict dissent, and censor unwanted information, increasing regime resiliency through a virtually manufactured world safe for autocracy. Many leaders of the developing world have taken advantage of this, as China has exported its model to other countries, such as Venezuela, to restrict the freedoms of the modern citizen.

As authoritarian governments increase regime resiliency by leveraging digital technologies, citizens of liberal democracies have trouble trusting these same technologies, in no small part due to their exploitation by authoritarians and their agents at home and abroad. This understandable hesitancy may not only lead to a technology gap, but leave democratic institutions vulnerable to threats from these same technologies; one can simply look to Russia’s meddling attempts in recent U.S. elections. The pendulum has swung towards autocracy as digital technologies seem to have had an asymmetrical effect, bolstering authoritarian regimes as the tides turn against the previous post-Cold War wave of liberalism. Pessimists are looking backwards, opining that perhaps digital technologies are in tension with--or even pose a threat to--liberal democratic governments. This does not need to be the case.

As pointed out by Mike Masnick, when charting the future of the digital landscape, one should not automatically assume “progress towards a ‘good’ outcome is inevitable and easy,” but nor is the path towards a technological dystopia. Not all progress is good, but if we can’t move forward and can’t stay still, that leaves only one option. By rethinking how we apply these technologies, democracies around the world can find ways to both enhance liberal democratic values and protect against weaknesses inherent to the political ideology, creating resilient “digital democracies” to stand against the rising autocratic tide and reinforce their own struggling institutions.

Transparency through Technology

The main method by which democracies can enhance liberal principles is by creating more transparent governments. Transparency can act to limit the powers of government, promoting freedom and individualism, while making policy and officials more effective and accountable, respectively.
An informed, educated citizenry is the cornerstone of any robust liberal democracy, and there’s no reason the technologies of the Third Industrial Revolution shouldn’t be employed in service of this goal. To do this, digital technologies should be leveraged to better illuminate the wants and needs of citizens for more informed policymaking, create reliable metrics so constituents can directly measure the effects of politicians’ policy decisions, and generally increase the transparency of the actions of government officials, law enforcement agencies, and lobbyists.

Though digital technologies make it possible for countless data points to be gathered on individual citizens, the potential of these processes goes well beyond delivering targeted ads, and realizing it does not require anything like a CCP-style social credit score. Data analysis makes it possible to create policy options that better fit citizens’ needs; customizability is one example. If insurance companies allow customers to select policy options based on their needs, why can national governments not do the same based on a person's location, age, family size, economic situation, and so on? We already see this in tax policy, but with digital technologies, this can be applied more broadly. For example, the U.S. welfare state is notoriously difficult to navigate and runs on systems almost old enough to qualify for Social Security. An embrace of new developments could not only make programs more accessible to those who qualify, but more tailored to their specific needs. Policymaking can be reimagined to create options that incorporate different citizens' wants and needs, and advances in digital technologies allow for a larger, faster, and more diverse analysis of the data required to design and implement these complex solutions.

One of the main benefits of artificial intelligence, machine learning, and “big data” is the ability to uncover causal relationships between variables. By using these technologies, it will be easier to understand the direct effects of policy decisions across a number of variables. This level of transparency would make for a more informed citizenry, as people could see how their representatives' voting habits directly affect their bank accounts, the job market, their access to healthcare, and more. For example, governments could use the abundance of financial data to publish reports showing how specific tax laws actually affect different demographic groups, taking the guesswork out of evaluating policy decisions. The use of granular data to analyze mistakes in the designation of opportunity zones created by the Tax Cuts and Jobs Act is one example. Digital technologies could also allow for simulations of different policy proposals to see how they might affect an individual, city, or state. Digital twin technologies created by companies like Deloitte and Dassault are already being leveraged by governments to understand how certain decisions on energy and logistics will change how cities operate. Another possible use case of these technologies is policy trials that would allow governments to study the effects of lawmaking in a given geographic area and time period, similar to how businesses run trials on different marketing plans.
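As a toy illustration of the kind of policy-effect estimation described above, here is a difference-in-differences sketch on synthetic data. The column names, the numbers, and the "true" effect are all invented for the example; real evaluations require far more care about confounders and data quality.

```python
# Toy illustration on synthetic data: a difference-in-differences estimate of
# a policy's effect on a made-up outcome. The column names and the "true"
# effect size are invented for the example -- this is not real government data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = lives where the policy was adopted
    "post":    rng.integers(0, 2, n),   # 1 = observed after the policy took effect
})
true_effect = 1.5  # the effect we hope the estimator recovers
df["income_change"] = (
    0.5 * df["treated"] + 0.8 * df["post"]
    + true_effect * df["treated"] * df["post"]
    + rng.normal(0, 1.0, n)             # everything we didn't measure
)

# Difference-in-differences: how much more did the treated group change after
# the policy than the untreated group did over the same period?
means = df.groupby(["treated", "post"])["income_change"].mean()
estimate = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(f"Estimated policy effect: {estimate:.2f} (true value: {true_effect})")
```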
A “low fidelity” version of such policy trials could be seen in 2013, when the Obama administration allowed Colorado and Washington to experiment with legalizing recreational marijuana. While reasonable steps should be taken to anonymize such information, it’s possible to publish it in a manner that’s transparent and easily digestible by the press, policy analysts, and public officials, making it harder for pie-in-the-sky policy proposals to be introduced and adopted on nothing more than the word of their proponents.

Lastly, digital democracy can shed light on the actions of government and adjacent officials. The idea is similar to that of China’s surveillance system, but in reverse. If China can use digital technologies to monitor citizens, thereby dissuading populations from making certain choices, why can citizens not do the same to governments? Indeed, the adoption of a social credit system was a deliberate, top-down choice by the CCP--not the natural evolution of “big data.” Who’s to say that a liberal democracy can’t flip the script? When instances of government corruption, police abuse, and lobbying malpractice are captured and shared, officials are dissuaded from making decisions against the public’s interests. Apart from Facebook and Twitter, a number of specialized platforms are being deployed for that very purpose. Guatemalans have experimented with a social platform that allows users to share examples of police corruption. The German-made LobbyControl provides transparency on lobbying at the local and EU level. Working together, liberal democracies can share these platforms to create an international system of government accountability.

Defense through Digitization

Liberal democratic systems are not without their weaknesses. One such vulnerability is slowness, as can be seen in America’s sluggish and haphazard response to the COVID-19 pandemic. Just as they can be used to enhance the tenets of liberal democracy, digital technologies can also protect against its inherent weaknesses by accelerating government response in the face of crisis, preventing the spread of propaganda and polarization, and protecting citizens’ rights to freedom and privacy.

AI, ML, and big data can be leveraged in three key ways to help liberal democratic governments with crisis response. First, before a crisis strikes, algorithms can analyze data to uncover vulnerabilities in a system before they take hold. This could have been useful in predicting the collapse of the housing market prior to the 2008 financial crisis. Once a crisis has struck, these technologies can ascertain its principal drivers so resources can be deployed accordingly. Tools like this could have been useful during the ongoing hunger crisis in Burkina Faso, where the government might not have shut down key links in the food supply chain had it realized that malnutrition was a larger cause of death than COVID-19. Lastly, these tools can be used in the post-crisis period to understand which policies had the most beneficial impact, helping to prepare for future events, such as the next pandemic.

And again, openness with this information makes it possible for more parties to cry foul when something isn’t right. In the case of the 2008 financial crisis, there was a vocal minority sounding the alarm. Still, the talking heads and smartest guys in the room maintained a rosy view and were able to dismiss those critics as Cassandras.
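As a toy illustration of the "spot the warning signs early" idea, here is a sketch that flags unusual movement in a made-up economic indicator. Everything here -- the series, the baseline, the threshold -- is invented; real early-warning models are far more sophisticated than a z-score check.

```python
# Toy illustration (synthetic data): flagging early-warning signs in a made-up
# economic indicator by comparing it to a calm baseline period. The series,
# threshold, and window are invented for the example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
quiet = rng.normal(1.0, 0.05, 96)                                   # eight calm years (monthly)
buildup = 1.0 + np.linspace(0, 0.6, 24) + rng.normal(0, 0.05, 24)   # two years of drift upward
indicator = pd.Series(np.concatenate([quiet, buildup]), name="debt_to_income")

# Score each month against the calm period. A production system would use a
# trailing window rather than a hand-picked baseline, but the idea is the same.
baseline_mean, baseline_std = quiet.mean(), quiet.std()
z_scores = (indicator - baseline_mean) / baseline_std

# Flag any month that sits more than three standard deviations above baseline.
flagged = z_scores[z_scores > 3]
print(f"{len(flagged)} months flagged; first warning at month {flagged.index.min()}")
```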
Open access by a larger swath of the public to warning signs from reliable sources makes it less likely that those who should know better can take a “nothing-to-see-here” line to be repeated by pundits, public intellectuals, and policymakers.

Sometimes weaknesses and vulnerabilities are driven by our own applications of technology and require course correction. In democracies across the globe, the public appears to be becoming more polarized. One driver of this is the ease with which technology can be leveraged to sharpen divides in societies and then weaponize public opinion. But it is not the technology that is inherently at fault; it is the application of these algorithms to maximize views and profits. Studies show that a large percentage of citizens are less polarized than previously believed. Unfortunately, these moderates may choose to stay away from certain social platforms in order to avoid inflammatory media. But what if the algorithms were rewritten to prop up neutral voices rather than to spread inflammatory content? Moderates might be more willing to use these platforms, and populations would more readily see measured perspectives on an issue. By redesigning the algorithms we use to spread information, digital technologies may be able to turn the tide against tribalism and, with it, polarization. There is no neutral design, and voices that bring more light than heat--and that turn down the temperature of online discourse more broadly--deserve amplification.

As discussed in the previous section, recommendation systems operate on algorithms that we do not quite understand. These algorithms are not inherently undemocratic, but their applications can lead to unwanted side effects that infringe on our freedoms and privacy. By understanding how to game these algorithmic recommendation engines, outside actors are able to create media that can influence our perspectives and, subsequently, our decision-making, in effect limiting our freedom by undermining our autonomy. What makes this even more dangerous is that we are seldom aware it is happening, as we scroll through videos, posts, and tweets on autopilot. A solution to this infringement on our online freedoms can be found by assessing these algorithms and redesigning them to serve the purposes we require.

Looking to privacy, a number of tools have been created that can add noise to data, making it difficult for digital entities to uncover insights. As an example, a program on your computer could randomly jump to different websites during your downtime to prevent unwanted AI systems from making accurate recommendations based on your browsing history. Likewise, AI software can add similar noise to online pictures by changing a few pixels’ colors. One could apply this noise to their Facebook or Instagram posts to prevent facial recognition software from recognizing the images, while allowing friends and family to see the pictures largely unchanged (a toy sketch of the idea appears below). These sorts of systems could be used in places like China to confuse digital surveillance technology. One key note to remember: if and when “digital democracies” start to appear, it is important not to cross the threshold into authoritarianism. The goal is to increase resiliency without further infringing on our rights to freedom and privacy through digital technologies.
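Here is the promised toy sketch of the pixel-noise idea, using Python with Pillow and NumPy. It randomly nudges a small fraction of pixels; real "image cloaking" tools compute carefully targeted adversarial perturbations, so treat this as an illustration of the concept, not a defense against actual facial recognition systems. The filenames are hypothetical.

```python
# Toy sketch of the pixel-noise idea: randomly perturb a small fraction of
# pixels before sharing a photo. Real image-cloaking tools compute targeted
# adversarial perturbations; this random version only illustrates the concept.
import numpy as np
from PIL import Image

def add_pixel_noise(path_in: str, path_out: str, fraction: float = 0.02,
                    max_shift: int = 12) -> None:
    """Nudge the colors of a random subset of pixels by a small amount."""
    img = np.array(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng()

    # Pick roughly `fraction` of pixel positions and shift each of their
    # RGB channels by a small random amount.
    mask = rng.random(img.shape[:2]) < fraction
    noise = rng.integers(-max_shift, max_shift + 1, size=img.shape, dtype=np.int16)
    img[mask] += noise[mask]

    Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save(path_out)

# Hypothetical filenames, purely for illustration:
# add_pixel_noise("vacation.jpg", "vacation_noised.jpg")
```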
Increasing Resiliency through Trust

The examples above serve as a start to the discussion of how democracies can become resilient through the use of digital technologies. But “digital democracies” are not inevitable, as Western liberal societies harbor a certain mistrust of the big tech companies that are vital to driving such a transformation. In order to evolve into “digital democracies,” three main societal changes must occur.

The first is establishing trust between big Western tech conglomerates and governments. In the US, mistrust of companies like Google, Facebook, and Amazon has led some to call for breaking up these giants, but this is not the solution. Without these companies on our side, liberal democracies may not be able to keep up with the advancements made by Eastern tech conglomerates such as Baidu, Alibaba, Tencent, and Xiaomi. Instead, governments must partner with tech firms to more clearly define rules and regulations, as well as the responsibilities of the two groups. Without an open, non-overreaching dialogue, the situation will remain hostile, making it difficult to establish technological resiliency.

Second is trust between governments and the general public regarding the use of digital technologies. Partnerships between big tech and governments, as discussed above, may lead to greater issues, as can be seen in China’s use of big tech to create a state-controlled market and social economy. Such a partnership could also be antithetical to principles of liberalism by placing too much power in the hands of the government. Just as the media was once seen as a watchdog over governments and politicians, there must be an independent body that serves as a watchdog over governments and their use of tech. There are nonprofits, such as the Center for Humane Technology, that advocate for mission-focused tech development. Similar organizations will be necessary to serve as guardians against government abuse of digital technologies. With independent bodies like these in place, populations may begin to trust governments to use these technologies only to further the ideals of liberalism.

The last piece is to establish trust between big tech and the general public. This ties back to transparency. As stakeholders in our own data, people should have a say in, or at minimum an understanding of, how their information is used. But many of the new AI, ML, and data models used by big tech are often seen as “black boxes.” Our data goes in, and a result in the form of a product recommendation, news story, or social media post comes out, without a clear understanding of how the outcome was reached. By opening up algorithms and making them fair, accountable, and transparent, people could feel more comfortable, understanding how their data is actually acquired, assessed, and leveraged. This could be a key step in making digital technologies democratic: it would allow citizens to claim a stake in technology, just as big tech has claimed a stake in our data.

Trust between citizens and governments is a fundamental principle of liberalism and democracy, but in today's ever-polarizing society, it can be hard to come by. The situation becomes even more complex when tech titans are added to the mix. Organizations exist to help establish this trust by guiding governments and big tech in more “humane” directions, but it will take cooperation from all stakeholders, along with NGO partners, to increase outreach and communication and build a more transparent relationship.
This is the first step towards increasing the resiliency of democracy, a necessary lever to swing the pendulum back to the people.

Ishpreet Singh is a recent engineering grad currently working as a strategy consultant.
|
![]() |
by Tim Cushing on (#5RYY1)
[UPDATE]: Well, that was quick. Fenster has been released, which hopefully indicates Myanmar's unelected government is discovering it's a bad idea to pick fights with most of the rest of the world. However, I'm sure it will continue to brutalize its own citizens because those advocating for their rights on a local level won't have the leverage of the US State Department. Here's the statement by the US Secretary of State Antony Blinken celebrating Fenster's unexpected release:
|
![]() |
by Tim Cushing on (#5RYWE)
An American journalist is just one of many victims of a coup that overthrew Myanmar's actual elected government and replaced it with the country's military, which claimed the election its favored party lost had been, in effect, stolen. No election irregularities were discovered, but that didn't matter much to the military, which had the might (but not the right) to seize power. Along with the new government came new rules. Many of those rules targeted opponents and critics of the unelected government. Plenty of those targeted were journalists. Newspapers that had been at least tolerated under the previous regime were now deemed illegal operations. One of those caught in the new government's net was American-born journalist Danny Fenster. Fenster wrote for a news outlet the coup perpetrators declared illegal shortly after they took power. Thumbing its nose at sanctions imposed on it by dozens of countries, the government hauled Fenster into its kangaroo court and decided the actual facts were too inconvenient to be given any credence by the prosecution.
|