The Legal Gray Zone of Deepfake Political Speech


24 Oct 2025


The Rise of Deepfakes in the 2024 Election Cycle

The 2024 U.S. election cycle was defined not only by fierce partisanship and record-breaking campaign spending, but also by the rise of a new, destabilizing force: artificial intelligence-generated deepfakes. Synthetic videos and audio of politicians have become a fixture in online discourse, often spreading faster than corrections can catch up. While these tools can, in theory, democratize expression and satire, they also pose unprecedented risks to electoral integrity. The law has lagged behind the technology, leaving regulators, platforms, and courts struggling to balance free expression against the need to protect voters from deception.

A Global Surge in Synthetic Manipulation

The sheer scale of this phenomenon is striking. A 2024 report by the cybersecurity firm Recorded Future documented 82 pieces of AI-generated deepfake content targeting public figures across 38 countries in a single year, with a disproportionate number focused on elections. The Political Deepfakes Incidents Database, a new initiative designed to track synthetically generated political media, demonstrates how quickly and broadly these manipulations diffuse across platforms.

Public anxiety mirrors these developments. A 2023 YouGov survey found that 85% of Americans were either “very” or “somewhat” concerned about the spread of misleading video and audio deepfakes. A 2024 multinational survey conducted by the identity verification firm Jumio found that 72% of Americans believed deepfakes would influence elections, and 70% reported feeling more skeptical toward online content overall. The net effect is a dual crisis: a surge in the supply of manipulative content and a decline in the public’s baseline trust in political communication.

State Legislative Experiments

In the face of rising public concern, state legislatures have begun to experiment. According to Ballotpedia, more than 25 bills related to deepfakes have been enacted across the states this year. In 2024, California passed the Defending Democracy from Deepfake Deception Act, which required platforms to block or label AI-generated political content during the 120-day period leading up to an election. The law also created a private right of action for candidates to sue creators or distributors of offending content.

However, the law was quickly challenged. Right-wing content creator Chris Kohls filed suit, and in August 2025, a federal judge struck down portions of California’s law. The court held that key provisions conflicted with Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content. The court also signaled that a companion labeling requirement for digital campaign ads was likely unconstitutional, characterizing the measure as an overly broad “censorship law” unlikely to survive First Amendment scrutiny.

Minnesota enacted a similar ban in 2023, prohibiting the dissemination of deepfakes that could mislead voters. X (formerly Twitter) sued to block the law, arguing it violated free speech and conflicted with platform immunity under Section 230. While the litigation is ongoing, early rulings suggest courts remain skeptical of sweeping prohibitions on political deepfakes, particularly when they risk chilling satire or artistic expression. Meanwhile, the House of Representatives passed the One Big Beautiful Bill Act in May 2025, which included a provision that would impose a ten-year moratorium on state-level AI laws. Supporters argue that such federal preemption is necessary to avoid a patchwork of conflicting regulations, while critics contend it leaves elections vulnerable in the absence of meaningful national protections.

Legal Fault Lines and Constitutional Tensions

These developments highlight the deep legal fault lines at play in regulating political deepfakes. State-level regulations are consistently constrained by Section 230, as even carefully drafted state laws may be preempted by the broad immunity that statute affords online platforms. Courts have also shown restraint toward election-specific content mandates, finding that labeling or takedown obligations for election-related synthetic content risk being deemed overbroad. With favorable judicial precedents, platforms have significant leverage to resist aggressive content mandates, particularly those implicating political speech. This fragmented legal landscape creates opportunities for forum shopping, allowing bad actors to exploit jurisdictions with weaker oversight or enforcement.

At the heart of the problem lies the First Amendment. Unlike fraud, obscenity, or true threats, political speech, even when misleading, enjoys robust constitutional protection. Courts are thus wary of laws that attempt to ban “false” political speech, no matter how technologically novel. Yet the stakes feel different when synthetic media is involved. Deepfakes can convincingly depict a candidate uttering words they never spoke or engaging in conduct that never occurred. Unlike traditional campaign lies, these fictions weaponize the visual and auditory cues people instinctively trust. As one commentator in Forbes put it, deepfakes pose “a unique kind of disinformation threat—more visceral, more immediate, and harder to disprove.”

Scholarly Proposals and Theoretical Approaches

Legal scholars have begun to grapple with this tension. Jacob Bourgault argues that existing free speech exceptions are too narrow to regulate synthetic media effectively, suggesting that courts expand the constitutional privacy doctrine or apply false-light frameworks. A Michigan Law Review note, Destined to Deceive, proposes a “foreseeable harm” standard tailored to deepfakes, designed to survive strict constitutional scrutiny.

Other scholars propose conceptualizing deepfakes as “non-testimonial falsehoods,” forms of speech less deserving of robust First Amendment protection. Still, they caution against sweeping prohibitions that might inadvertently criminalize parody, satire, or artistic expression—longstanding traditions in American political discourse. The challenge, then, is to craft a legal regime that targets deceptive electoral manipulation without flattening protected traditions of political humor and critique. Shows like Saturday Night Live or political cartoons might easily fall within the scope of overly broad deepfake laws if not carefully exempted.

Emerging Policy Models

Several policy models have emerged from this debate. One approach emphasizes mandatory disclosure and labeling. Rather than outright bans, such measures would require creators and platforms to disclose when content is AI-generated. This model parallels campaign finance disclosure rules: voters retain access to information but can better assess its provenance. Another approach shifts responsibility toward platforms, arguing that they are best positioned to detect and moderate synthetic content at scale. However, Section 230 remains a formidable shield, and imposing duties on platforms risks both constitutional challenges and logistical impossibility.

To preserve expressive traditions, some advocates propose safe harbors for satire and parody, ensuring that enforcement targets only deceptive content designed to mislead voters. Finally, some experts call for federal baseline standards. A federal framework could prevent the fragmentation seen in state approaches, harmonizing rules across jurisdictions. The Brennan Center for Justice has proposed such a model, which would require disclosure and ban only those deepfakes that deceptively depict candidates in the lead-up to elections.

Each of these approaches carries tradeoffs. Labeling requirements may empower voters but rely heavily on detection accuracy. Platform accountability could strengthen moderation but risks entrenching private censorship. Federal baselines might reduce forum-shopping, yet designing rules narrow enough to survive constitutional scrutiny remains a formidable task.

The Path Forward

As the 2026 election cycle approaches, the question is no longer whether deepfakes will play a role, but how resilient our legal frameworks will be in the face of their proliferation. The litigation in California and Minnesota underscores the limits of state-level experimentation and the difficulty of crafting rules that can withstand First Amendment scrutiny. At the same time, doing nothing risks eroding baseline trust in political communication and leaving voters adrift in an environment where seeing is no longer believing.

The path forward is unlikely to rest on sweeping bans. Instead, narrowly tailored disclosure requirements, safe harbors for parody and satire, and a harmonized federal baseline may provide a more sustainable balance between liberty and truth. Whether these measures can be implemented without chilling core political expression will test not only lawmakers but also the courts and platforms that mediate public debate. If the law continues to lag behind technological manipulation, the price may not be a single election, but the very legitimacy of democratic participation itself.

Yubin Kim, The Legal Gray Zone of Deepfake Political Speech, Cornell J.L. & Pub. Pol'y, The Issue Spotter (Oct. 24, 2025), https://publications.lawschool.cornell.edu/jlpp/2025/10/24/the-legal-gray-zone-of-deepfake-political-speech/.

About the Author

Yubin Kim is a second-year law student at Cornell Law School. She graduated from the University of California, Santa Barbara with a degree in Psychological & Brain Sciences. Aside from her involvement with Cornell Law School’s Journal of Law and Public Policy, she is Co-President of the Women of Color Collective, Social Chair of the Asian Pacific American Law Students Association, and Vice President of Outreach for the Antitrust Club.