Digital Doppelgängers: Rewriting Free Speech for the Age of AI
Introduction
The First Amendment enshrines freedom of speech as fundamental to democratic governance, grounded in the assumption that open discourse enables collective self-rule.1 That protection has historically rested on several interlocking premises: that speech is attributable to a speaker, expressive of human intent, and constrained by material and social costs.2 These premises explain both the Amendment’s broad scope and the narrowness of its categorical exceptions (defamation, fraud, incitement, true threats, and obscenity), each of which presupposes a human actor whose conduct and mental state can be evaluated and constrained without destabilizing the broader marketplace of ideas. Today, that framework meets its starkest challenge: AI-generated hyperrealistic audiovisual fabrications, commonly known as deepfakes, which disseminate false speech at scale without authorship, intent, or human limitation.3 Because an amendment drafted in 1791 cannot stretch indefinitely to cover modern technology while still preserving robust debate, U.S. free speech law must recognize a narrowly tailored, harm-based exception for malicious AI-generated impersonations.
I. The First Amendment’s Original Assumptions
When the First Amendment was adopted, speaking, printing, or broadcasting involved tangible human effort. The Supreme Court’s seminal free-speech tests still embed that assumption. Brandenburg v. Ohio (1969) established that the state may punish advocacy only when it is “directed to inciting or producing imminent lawless action” and likely to produce it, a test that depends on imputing intent to a speaker.4 New York Times Co. v. Sullivan (1964) imposed the
1 Vincent Blasi, “The Checking Value in First Amendment Theory,” American Bar Foundation Research Journal 3, no. 1 (1978): 521–64, https://www.jstor.org/stable/pdf/827945.pdf.
2 David Atkinson, Jena D. Hwang & Jacob R. Morrison, Intentionally Unintentional: GenAI Exceptionalism and the First Amendment, Cornell University (June 5, 2025), https://arxiv.org/abs/2506.05211.
3 Robert Chesney & Danielle Citron, “Deepfakes and the New Disinformation War,” Foreign Affairs (December 11, 2018), https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war.
4 Brandenburg v. Ohio, 395 U.S. 444 (1969), https://tile.loc.gov/storage-services/service/ll/usrep/usrep395/usrep395444/usrep395444.pdf.
“actual malice” standard for public-figure defamation, protecting erroneous statements unless made with knowledge of falsity or reckless disregard.5 United States v. Alvarez (2012) declined to create a new category of criminally punishable lies, emphasizing that false statements remain protected absent a traditionally unprotected category or demonstrable harm.6 These cases collectively show three things: (1) the Court protects false or offensive speech to avoid chilling robust debate; (2) exceptions hinge on speaker culpability and tangible harm; and (3) the Court resists novel, content-based criminalization unless the speech functions like conduct. As one recent analysis notes, under traditional jurisprudence “an idea becomes speech when it’s spoken by a speaker: words actually written, a speech actually given, brushstrokes actually painted. The concept is so plain that it has hardly merited any scrutiny.”7 Because a speaker has always been assumed, the Constitution has never before been asked to protect or punish wholly machine-made content. Generative-AI models, however, can now produce photorealistic video and audio “speech” at scale, with minimal human intervention and no clear author.8 The resulting content lacks a conventional speaker, lacks traceable intent, and evades the “cost” barrier that once limited falsity. The doctrinal scaffolding of the First Amendment, built around human speech, therefore begins to crumble.
II. Harms to National Security and Democratic Participation Under the Status Quo
The distinctive harms of deepfakes flow from their plausibility and scale. In March 2022, a sophisticated deepfake video portraying Ukrainian President Volodymyr Zelenskyy urging
5 New York Times Co. v. Sullivan, 376 U.S. 254 (1964), https://tile.loc.gov/storage-services/service/ll/usrep/usrep376/usrep376254/usrep376254.pdf.
6 United States v. Alvarez, 567 U.S. 709 (2012), https://supreme.justia.com/cases/federal/us/567/709/.
7 Mackenzie Austin and Max Levy, Speech Certainty: Algorithmic Speech and the Limits of the First Amendment, 77 Stan. L. Rev. 1 (2025), https://review.law.stanford.edu/wp-content/uploads/sites/3/2025/01/Austin-Levy-77-Stan.-L.-Rev.-1.pdf.
8 David Atkinson, Jena D. Hwang & Jacob R. Morrison, Intentionally Unintentional: GenAI Exceptionalism and the First Amendment, Cornell University (June 5, 2025), https://arxiv.org/abs/2506.05211.
surrender was disseminated on hacked Ukrainian TV and social media.9 Although quickly debunked, it played on panic and aimed to demoralize Ukrainian defenders. Weaponized synthetic media can therefore function as force multipliers in gray-zone operations—below the threshold of armed attack but aimed at coercion and destabilization.10 Similar misleading tactics surfaced in Taiwan’s 2024 presidential election, where networks of falsified political content sought to manipulate public opinion and inflame social divisions.11 Analysts found election-period social media accounts linked to influence operations posting coordinated videos and posts that capitalized on AI-generated imagery and voice misattribution to shape partisan narratives, with some accounts branded as “content hubs” that pushed politically charged material ahead of voting.12 Although many such videos were of uneven technical quality and swiftly challenged by media literacy efforts, Taiwanese experts warned that even low-fidelity synthetic content could compound existing social fissures by amplifying doubt, division, and confusion among voters.13 Robert Chesney, a national security law scholar and Dean at the University of Texas School of Law,14 and Danielle Keats Citron, a leading constitutional and cyberlaw scholar at the University of Virginia School of Law,15 have cataloged these democratic risks with particular clarity, emphasizing that deepfakes distort policy deliberation, manipulate
9 “Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn,” KPBS Public Media, March 16, 2022, https://www.kpbs.org/news/national/2022/03/16/deepfake-video-of-zelenskyy-could-be-tip-of-the-iceberg-in-info-war-experts-warn.
10 Michael C. Horowitz, Henry R. Harknett & Matthew Fuhrmann, The Weaponisation of Deepfakes, International Centre for Counter-Terrorism – The Hague, December 2023, https://icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf.
11 Chen-ling Hung, “AI and Disinformation in Taiwan’s 2024 Election,” Thomson Foundation, https://www.thomsonfoundation.org/latest/ai-and-disinformation-in-taiwan-s-2024-election/.
12 Fake social media accounts are targeting Taiwan’s presidential election, Georgia Public Broadcasting (Dec. 13, 2023), https://www.gpb.org/news/2023/12/13/fake-social-media-accounts-are-targeting-taiwans-presidential-election.
13 Chen-ling Hung, “AI and Disinformation in Taiwan’s 2024 Election,” Thomson Foundation, https://www.thomsonfoundation.org/latest/ai-and-disinformation-in-taiwan-s-2024-election/.
14 Robert M. Chesney, Faculty Profile, University of Texas School of Law, https://law.utexas.edu/faculty/robert-m-chesney/.
15 Danielle K. Citron, Faculty Profile, University of Virginia School of Law, https://law.virginia.edu/faculty/profile/uqg7tt/2964150.
electoral processes, and corrode institutional legitimacy by undermining public confidence in audiovisual evidence.16 Deepfakes therefore impose a structural harm on democratic self-governance, one that First Amendment jurisprudence currently lacks the tools to adequately confront.
III. AI‑Generated Deepfakes as an Unprotected Category
A narrow unprotected category for harmful AI-generated deepfakes would align with the Court’s historical practice of recognizing exceptions where expression generates socially unmanageable harms that existing doctrine cannot absorb. Obscenity, defamation, and fraud emerged not from hostility toward free speech, but from recognition that certain forms of expression operate as conduits for injury that overwhelm post hoc remedies and destabilize legal ordering.17 Deepfakes present a comparable problem, where synthetic audiovisual fabrications sever the link between speech and speaker and exploit institutional trust in audiovisual evidence. A coherent category would therefore target deepfakes that (1) present falsified content as real, (2) concern identifiable persons or institutions, and (3) generate materially foreseeable harm to individual rights or democratic processes. Such tailoring preserves constitutional breathing room for parody, artistic experimentation, and disclosed synthetic media, while capturing deception deployed at scale.
Conclusion
Constitutional objections grounded in overbreadth and chilling effects warrant serious engagement, but they are not insurmountable. Comparative experience underscores that no democratic system has yet solved the deepfake problem, but several have begun grappling with it in meaningful ways. The European Union’s AI Act and Taiwan’s election-integrity measures
16 Robert Chesney & Danielle Citron, “Deepfakes and the New Disinformation War,” Foreign Affairs (December 11, 2018), https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war.
17 The City University of New York, “Free Speech,” CUNY Office of Student Affairs, https://www.cuny.edu/about/administration/offices/student-affairs/our-cuny/free-speech/.
represent early, imperfect attempts to address synthetic media as a structural threat to democratic processes, offering examples of how regulatory frameworks can evolve alongside rapidly changing technologies.18,19 Although these regimes do not offer a blueprint for the U.S., they nevertheless demonstrate that democracies can intervene upstream without collapsing into censorship. Ultimately, the United States must modernize its constitutional framework to confront the structural challenges of synthetic media, recalibrating First Amendment protections for a technological era the Framers could never have imagined.
18 European Union, Artificial Intelligence Act, Article 50 on transparency obligations for AI systems that manipulate image, audio, or video content, (2024), https://artificialintelligenceact.eu/article/50/.
19 Global Taiwan Brief, Taiwan’s 2023 4C Strategy Against AI-Enhanced Disinformation, Global Taiwan Institute (Oct. 2024), https://globaltaiwan.org/wp-content/uploads/2024/10/GTB-9.20-PDF.pdf.
Bibliography
Atkinson, David, Jena D. Hwang, and Jacob R. Morrison. Intentionally Unintentional: GenAI Exceptionalism and the First Amendment. Cornell University, June 5, 2025. https://arxiv.org/abs/2506.05211.
Austin, Mackenzie, and Max Levy. “Speech Certainty: Algorithmic Speech and the Limits of the First Amendment.” Stanford Law Review 77 (2025): 1–66.
https://review.law.stanford.edu/wp-content/uploads/sites/3/2025/01/Austin-Levy-77-Stan.-L.-Rev.-1.pdf.
Blasi, Vincent. “The Checking Value in First Amendment Theory.” American Bar Foundation Research Journal 3, no. 1 (1978): 521–664. https://www.jstor.org/stable/pdf/827945.pdf.
Brandenburg v. Ohio, 395 U.S. 444 (1969).
https://tile.loc.gov/storage-services/service/ll/usrep/usrep395/usrep395444/usrep395444.pdf.
Chesney, Robert, and Danielle Keats Citron. “Deepfakes and the New Disinformation War.” Foreign Affairs, December 11, 2018.
https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war.
Citron, Danielle K. Faculty Profile. University of Virginia School of Law. https://law.virginia.edu/faculty/profile/uqg7tt/2964150.
European Union. Artificial Intelligence Act. Article 50: Transparency Obligations for Certain AI Systems, 2024. https://artificialintelligenceact.eu/article/50/.
Global Taiwan Institute. Taiwan’s 2023 4C Strategy Against AI-Enhanced Disinformation. Global Taiwan Brief, October 2024.
https://globaltaiwan.org/wp-content/uploads/2024/10/GTB-9.20-PDF.pdf.
Horowitz, Michael C., Henry R. Harknett, and Matthew Fuhrmann. The Weaponisation of Deepfakes. International Centre for Counter-Terrorism – The Hague, December 2023. https://icct.nl/sites/default/files/2023-12/The%20Weaponisation%20of%20Deepfakes.pdf.
Hung, Chen-ling. “AI and Disinformation in Taiwan’s 2024 Election.” Thomson Foundation. https://www.thomsonfoundation.org/latest/ai-and-disinformation-in-taiwan-s-2024-election/.
KPBS Public Media. “Deepfake Video of Zelenskyy Could Be ‘Tip of the Iceberg’ in Info War, Experts Warn.” March 16, 2022.
https://www.kpbs.org/news/national/2022/03/16/deepfake-video-of-zelenskyy-could-be-tip-of-the-iceberg-in-info-war-experts-warn.
New York Times Co. v. Sullivan, 376 U.S. 254 (1964).
https://tile.loc.gov/storage-services/service/ll/usrep/usrep376/usrep376254/usrep376254.pdf.
The City University of New York. “Free Speech.” CUNY Office of Student Affairs. https://www.cuny.edu/about/administration/offices/student-affairs/our-cuny/free-speech/.
United States v. Alvarez, 567 U.S. 709 (2012). https://supreme.justia.com/cases/federal/us/567/709/.
University of Texas School of Law. “Robert M. Chesney.” Faculty Profile. https://law.utexas.edu/faculty/robert-m-chesney/.