The First Amendment Gives You The Right To Lie, Even With AI
While the celebrity-driven allure of the Scarlett Johansson voice-alike story might be an easier headline grab, it is in the dark arts of election dirty trickery that you're more likely to find the kinds of misinformation concerns that have an impact on society. Indeed, experts have been warning for some time that fake text, images, video and audio generated by artificial intelligence are less and less the stuff of science fiction and more and more a part of our discourse.
It seems 2024 might be the inflection point, the year the worries stopped being hypothetical. The most egregious example is a warning from across the pond: just days before an election last fall, an AI-generated audio recording of Progressive Slovakia leader Michal Simecka surfaced, a fake conversation with a journalist in which he appeared to boast about rigging the election (which he went on to lose). In India, social platforms and WhatsApp groups are being flooded with AI-generated content featuring endorsements from dead people, jailed political figures, and Bollywood stars.
And in the U.S. during this presidential election cycle, we've already seen examples of AI fakes circulating online: photos of Donald Trump with his arms around friendly Black women at a party, photos of Trump hugging and kissing Dr. Anthony Fauci circulated by the Ron DeSantis campaign, and the voice of Joe Biden telling New Hampshire voters in a robocall not to show up for the Democratic primary. This week, the FCC proposed mandatory labeling of AI-generated content in political ads on television and radio.
Gone are the halcyon days of 2019, when a mere video editing trick slowed down Nancy Pelosi's speech in a janky attempt to make her seem drunk. With so many klaxons sounding as a flourishing misinformation culture on social networks collides with real advances in creating AI fakes, the natural question is what to do about it. The solution from state legislatures seems simple: target and ban deepfakes and synthetic content in elections. But these laws not only risk running afoul of free speech protections, they also attempt to solve the problem without addressing the larger failings of the civic information infrastructure that allow fake media to flourish.
Despite their popularity, laws banning synthetic and generative AI content almost certainly violate the First Amendment. That's the conclusion of a study we conducted this spring - "The Right to Lie with AI?" - as 11 states passed laws or bills banning or limiting synthetic media in campaigns in the 2023-24 sessions, according to tracking by the National Conference of State Legislatures.
Bills that failed to pass in other states presented troubling alternatives as well and offer a warning about the directions synthetic election content bans could go. The Georgia House passed a bill that would have made AI imitations of politicians a felony carrying a minimum two-year prison sentence, though the bill was tabled by the Senate in March, even after one legislator taunted the bill's opponents with AI-generated audio of them supposedly endorsing the bill.
Our analysis looked at the various provisions in this new wave of laws and proposals, as well as the earliest versions of them, including the anti-deepfake laws passed by California and Texas in 2019. So far, we have been unable to find any evidence that those or other laws have been enforced, nor have they been challenged in court. But they present problematic free speech issues in light of the Supreme Court's 2012 decision in U.S. v. Alvarez, in which the Court struck down the federal Stolen Valor Act and reaffirmed the First Amendment's protection of a speaker's right to lie unless the lie creates some other legally cognizable harm.
As Jeff Kosseff detailed in his new book "Liar in a Crowded Theater," the First Amendment has long protected false speech, even if (or especially when) that speech is about political campaigns. While the defendant in Alvarez was merely lying about his own military record, lower courts applying Alvarez have gone on to strike down false campaign speech laws in Ohio, Minnesota, and Massachusetts over the past decade, finding uniformly that these laws (a) triggered review under the strict scrutiny standard, a heavy burden for the state to meet in defending a speech restriction, (b) addressed a compelling state interest in fair elections, but (c) nevertheless were not narrowly tailored to serve that interest, and thus failed strict scrutiny review. In every one of those cases, citing Alvarez, the court found that the least restrictive means to address false political speech is "counterspeech" - that is, the speech should be rebutted in the marketplace of ideas with true speech.
The courts also found practical problems with the laws, which essentially triggered investigations or caused disruption in the weeks before elections - or even after early voting had begun - disputes that could not be resolved meaningfully before votes were cast. The inevitable outcome of complaints under false political speech laws would be dirty tricks and gamesmanship, rather than more truthful campaign advertising. And if you think triggering politically motivated investigations in the days running right up to an election isn't that big of a deal, Hillary Clinton would like a word.
Similar issues plague the current wave of laws, which include provisions such as:
- Electioneering: Basically, these provisions ban the use of deepfakes or AI-generated photos, audio, or video in a certain window before an election, such as Texas's 30-day ban on "deep fake videos" or the 90-day bans on the use of AI in election speech in new laws in Minnesota and Michigan. Time limits outlawing speech before an election were frowned upon in the Supreme Court's 2010 decision in Citizens United, and, as noted above, it is unlikely any challenge brought in such a short window before an election could be resolved in a meaningful way by the time votes are being cast. These provisions are likely unconstitutional. They are also likely unworkable in practice, with the spread of disinformation online far outpacing any court or agency's ability to investigate and remedy misdeeds.
- Injunctive relief: Despite decades of First Amendment jurisprudence disfavoring gag orders and injunctions as remedies, every law we reviewed allowed complainants to seek injunctions to stop the spread of the speech or to mandate labeling of political speech as AI-generated. Because these laws are generally enforceable by anybody - in California, any registered voter can file a complaint, and in Michigan, possible complainants include the attorney general, any candidate claiming injury, anyone depicted in an ad, or any organization representing the interests of voters "likely to be deceived" by the "manipulated content" - it is not hard to imagine a regular march to the courthouse by campaigns or aggrieved voters seeking hearings and gags on ads by candidates they don't like in the run-up to Election Day. Again, because false political speech is broadly protected by the First Amendment, it is unlikely any such gag or injunction would survive a challenge.
- Satire, parody, news media uses: Most legislatures carved out exceptions for categories that already receive First Amendment protection, perhaps noticing the flaws inherent in trying to regulate political speech. These savings clauses exempt classic categories recognized by the Supreme Court, such as humorous depictions of the kind protected in Hustler v. Falwell, as well as legitimate news media coverage of AI and deepfakes - coverage that is necessary in debunking them, part of the counterspeech noted above. Also, in recognition of the broad protection of political speech under New York Times v. Sullivan, some states such as California require a showing of "actual malice" - that is, publishing something knowingly false or with reckless disregard for the truth - for complainants to prevail. Even with these carve-outs, the laws are probably overbroad and unenforceable, but laws without such savings clauses are especially problematic.
- Mandatory disclaimers or disclosures: If any of these deepfake/AI provisions is likely to stand, it is the labeling requirements common to most of the state laws we reviewed. For instance, Michigan requires "paid political advertisements" to include language stating that the advertisement "was generated in whole or substantially by artificial intelligence," with specific requirements for how the disclosure must be made depending on whether the advertisement is graphic (including photo and video) or audio, and with fines of $250 for a first offense and up to $1,000 for additional infractions. Idaho's "Freedom from AI-Rigged (FAIR) Elections Act," enacted in 2024, makes labeling an affirmative defense: people accused of using synthetic media can rebut a civil action by including a prominent disclosure stating "This (video/audio) has been manipulated," as detailed in the law. The Supreme Court has upheld disclosure and labeling provisions in election laws in Citizens United and Buckley v. Valeo, finding they were not overly burdensome on political speech.
What we found is that state legislators often were trying to outlaw ads that use deepfakes or AI by reusing the same template they have used to try to ban false political advertising in the past. And that false advertising template has failed, time and again, when challenged in court. Outside of mandatory labeling, these laws likely would not survive the first attempt to prosecute someone or to enjoin an ad that runs afoul of them. Imagine, for example, DeSantis campaign staffers - or DeSantis himself, even - having to fend off criminal prosecution because someone texted or posted a manipulated image of Donald Trump during the Republican primaries. As broadly as these laws are written, that would be a possibility.
These laws also seem to be the unsurprising result of a moral panic about AI and deepfakes in elections that has captivated legislators' attention and motivated them to do something, even if that something violates the First Amendment. And this is despite the fact that, so far, none of these fakes has actually worked. The DeSantis campaign photos, the RNC's fake dystopian-future video, and the fake Biden robocall were all caught and debunked quickly and broadly by political opponents and news media.
As we were finalizing this project, a headline in the Washington Post caught our attention: "Deepfake Kari Lake Video Shows Coming Chaos of AI in Elections." But in reality, there was no chaos. The video was a ploy by a journalist for the Arizona Agenda to show the potential harm of AI-generated videos, depicting Lake saying: "Subscribe to the Arizona Agenda for hard-hitting real news... And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting."
As David Greene noted for EFF in 2018, we don't need new laws about deepfake and AI speech because we already have them. These include bans on falsely pretending to be a public official, as well as civil claims for right of publicity, defamation, and false light, which have developed over the past century to combat harmful false speech (a category that includes political speech).
Beyond the First Amendment issues with the AI laws, the bans are an attempt to deflect from the fact that our leaders can't agree on a pathway toward a healthier information ecosystem. In that sense, another way to see the rise of AI and synthetic fakes is that their existence is not the thing to fix but rather a symptom of something much more broken: our civic information infrastructure.
"Sunshine is the best disinfectant" is the phrase many communication law students learn about the First Amendment, a nod to the democratic ideal that the solution to bad speech is good speech. What is challenging about synthetic fakes in 2024 is not that they exist, but that it's questionable whether we have the technology and platforms in place to let good speech act as a meaningful check against a torrent of synthetic content that can drown out truth by sheer volume. And the consequences for democracy if we don't figure out a workable solution could be devastating.
Synthetic fakes enter our discourse as part of a troubled information sphere. The classic paradigm, in the words of John Milton, is truth grappling with error. In a media context, that means speech platforms that offer us the ability to discern truth by weighing claims and using our reason to decide, first individually and then collectively. But this idea of a speech marketplace has been able to thrive in part because we long had well-regarded sources devoted to using their gatekeeping and agenda-setting power to populate the discourse with a common set of verified information, and because those sources operated in an environment of distribution scarcity that gave their words a particular weight and power.
Thirty years ago, this old setup was thriving. But the rise of self-publishing first, social networks second, and now generative AI has created parallel speech platforms that, unlike journalism, are not business-dependent on truth-telling in the public interest. The conservative radio host who used Facebook to spread AI-generated images of Trump being friendly with Black citizens said he wasn't a journalist and wasn't pretending the images were real or accurate; he was just a "storyteller."
Coupled with the decimation of local news and the teetering business model of many national outlets, synthetic fakes - by virtue of both their reach on platforms and their sheer volume - have the ability to drown out the dwindling outlets that decades ago would have been a powerful counterweight to false speech.
Banning fakes, then, could be seen as more than merely a bad - and likely unconstitutional - idea. Synthetic media and deepfakes are a symptom of a larger problem: our crumbling information environment and the unwillingness of tech platforms to finally admit they are in the political discourse business and act as guardians of democratic self-governance. If censorship is off the table because of the First Amendment's protections for lying, then tech platforms have to step up and play the role that journalists have long played: using technical solutions and good internal policy to create a place where the public knows it can find truth in the form of verified facts.
Technical solutions such as Meta labeling AI content in advance of the upcoming U.S. elections and companies working to create a watermarking system for AI images represent good starts. But these remain cat-and-mouse games until the companies creating AI products and those hosting AI content frame policy around the reality that democracy is in deep trouble if Steve Bannon's famous maxim - that the way to win is to "flood the zone with shit" - becomes reality.
What this means is getting off the sidelines: treating this not merely as an engineering problem but as a social problem the platforms helped create and, in the spirit of good citizenship, must help us solve. Tech companies and the broader public cannot afford to rely on lawmakers to fix this, not when lawmakers' only passable ideas seem to be laws that violate free speech rights.
Chip Stewart is an attorney, media law scholar, and professor of journalism at Texas Christian University. He can be found at @medialawprof.bsky.social. Jeremy Littau is an associate professor at Lehigh University whose research focuses on digital media and emerging technology. He can be found at @jeremylittau.bsky.social.