Dealing with Deepfakes: Why We Need Ex Ante Solutions

In my last article, I took a positive stance on AI technology, specifically its use in the legal field; the increases in productivity and wealth that AI will bring are undeniable. But while the use and spread of AI is unstoppable, it can and should be regulated. I now turn to one of the most dangerous capabilities of AI: deepfakes. A deepfake is an image, audio recording, or video that has been manipulated to show people doing or saying something that was not actually said or done [1]. The two biggest threats posed by deepfake technology are the creation of pornography using the likeness of real people and the spread of disinformation. It is important that we adapt the law to these new and developing threats. Currently, there are no federal laws regarding deepfakes; many cases brought against deepfake creators allege violations of defamation laws, non-consensual pornography laws, or copyright laws [2]. When it comes to deepfakes, ex post remedies such as these are not sufficient. Responsibility should not fall solely on the victim to bring litigation against deepfake creators. Deepfake pornography and deepfake disinformation pose serious threats to individuals and society; the United States must adopt preventative legal measures such as criminalization and a Deep Fakes Task Force.

Deepfake technology is overwhelmingly used to create pornography. Although the spread of disinformation through deepfakes garners more attention, 96 percent of deepfakes online are pornographic [3]. When nonconsensual, this technology forces mainly women into pornography that users can watch and distribute [4]. Celebrities are frequent targets of deepfake pornography, but many websites and apps allow users to insert an image of any face into pornographic images and videos [5]. This is a major violation of consent and can endanger victims’ safety and livelihoods. These videos can be used to extort victims or as “revenge porn,” the distribution of sexually explicit images without consent [6]. Additionally, online searches for a victim’s name may surface deepfake images or videos, which can harm their reputation and employability [7]. Those targeted by these deepfakes may feel deep shame or embarrassment. Because very few states have explicit laws regarding deepfakes, victims may not know that they have legal recourse against creators [8]. Many victims likely do not even know that deepfake videos or images using their likeness exist on the internet. Even if such victims feel no emotional distress, the existence of these explicit images and videos is a major violation of their right to privacy and can seriously damage their reputations. Currently, creating this harmful deepfake pornography from any image or video is easy and widely accessible; preventative measures are desperately needed to stop this violation of human rights [9].

The spread of disinformation through deepfakes is another major problem posed by new AI technology. Because deepfakes can be so convincing, and because many people are not yet accustomed to this relatively new technology, deepfakes can mislead large numbers of people. For example, a number of false news reports have recently been posted to TikTok. Many of these fake stories feature the likeness of well-known journalists and appear to come from reputable sources. They look and sound real but are partially or completely fabricated. Because these videos are often sensational, they can quickly go viral [10]. People’s opinions may be influenced even by information they know is fake. As the 2024 election looms, the growing number of deepfakes is troubling: fake news that appears real has the potential to undermine genuine sources and influence elections. For example, after a tape surfaced ahead of the 2016 election of Donald Trump describing grabbing women’s genitals, he later suggested that the recording was not real [11]. When fake information, including credible-looking audio, video, and images, becomes prevalent, it is difficult to determine what is real and what is fake, which can break down systems of societal trust [12]. For these reasons, we must stop these deepfakes at their source rather than waiting for them to go viral and influence people and events before taking legal action to remove them.

Pursuing litigation is time-consuming and expensive, so the government should adopt ex ante legal solutions that prevent and criminalize the spread of harmful deepfakes. Settling disputes over deepfakes, especially pornographic ones, can be a dreadful experience for the victim. It opens them up to scrutiny and reminds them again and again of the pain they have been through [13]. In most cases, the damage has already been done: the victim’s reputation has been tarnished, they have been traumatized, or the beliefs of the deepfake’s viewers have been affected. To combat this issue, the United States government should quickly develop solutions that stop the problem at its roots. By requiring disclosure of fabricated content, banning the use of nonconsenting people in pornographic deepfakes, and establishing a task force that pulls together various governmental agencies to take down illegal deepfakes, the government can do a great deal to rein in this issue. Passing the “DEEP FAKES Accountability Act,” proposed in 2019 by New York Representative Yvette Clarke, would be a good start [14]. This bill, which has not moved since being referred to a subcommittee, would require that a watermark be placed on all deepfakes disclosing their fabricated nature [15]. It sets penalties for failing to disclose this fabrication, establishes privacy protections for those featured in deepfakes, and carves out exceptions for deepfakes that do not appear to be real, such as fictional television, movies, and music. It would also establish assistance for victims of deepfakes. Most importantly, it would create a Deep Fakes Task Force within the Department of Homeland Security to develop technology to identify and combat deepfakes (such technology, although imperfect, exists today) [16].
While passing this bill would be a good first step, more should be done to combat the creation and spread of deepfakes. The bill focuses on disinformation and does less to stop the production of nonconsensual pornographic material. The task force should not only work to remove deepfakes before they spread but should also shut down the sites and apps that allow users to create nonconsensual pornography.

Some may argue that the proposed regulation of deepfakes violates the First Amendment. As a form of expression, many deepfakes are indeed protected by the First Amendment [17]. However, the proposed ex ante solutions do not suppose that all deepfakes, even those meant to spread disinformation, should be banned. Instead, they require creators to disclose the nature of the material they produce in the form of watermarks. Nonconsensual pornography, however, along with the websites that assist in its creation, should be illegal outright. Section 230 of the 1996 Communications Decency Act, often cited in the deepfake debate, provides that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” [18]. Here, however, websites and apps take part in creating the content, often providing the pornography into which users insert images and faces. As such, the law should treat them as producers of the material.

Ex ante solutions to combat deepfake technology should be implemented alongside remedial measures that ameliorate the harm deepfakes cause. Nonconsensual pornographic deepfakes violate victims’ rights and cause emotional distress and reputational damage that litigation cannot undo. Because victims are often unaware of this material and the costs of pursuing litigation are high, justice is often not served against those who create and spread it, and the material may remain on the internet. Further, disinformation in the form of deepfakes, once spread, can do irreparable damage to a person’s reputation or to public opinion. The spread of this disinformation erodes people’s trust in all the information they receive, and ex post litigation cannot reverse this damage. The solution is to stop deepfakes before they are made or before they spread: take down the websites that create them, and identify and remove those that spread false information without disclosing that they are fabricated. If these solutions are not implemented soon, deepfakes may wreak havoc in upcoming elections and influence major events, and the growing deepfake porn industry will damage many people’s lives. We need new laws to stop the immense damage this technology has the potential to do.

Bibliography

  1. Merriam-Webster, “Definition of Deepfake,” www.merriam-webster.com, accessed October 15, 2023, https://www.merriam-webster.com/dictionary/deepfake.

  2. Caroline Quirk, “The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology,” Princeton Legal Journal, June 19, 2023, https://legaljournal.princeton.edu/the-high-stakes-of-deepfakes-the-growing-necessity-of-federal-legislation-to-regulate-this-rapidly-evolving-technology/#:~:text=In%20most%20cases%2C%20deepfakes%20may; Moncarol Wang, “Don’t Believe Your Eyes: Fighting Deepfaked Nonconsensual Pornography with Tort Law,” The University of Chicago Legal Forum (The University of Chicago), accessed October 15, 2023, https://legal-forum.uchicago.edu/print-archive/dont-believe-your-eyes-fighting-deepfaked-nonconsensual-pornography-tort-law#heading-5.

  3. Moira Donegan, “Demand for Deepfake Pornography Is Exploding. We Aren’t Ready for This Assault on Consent,” The Guardian, March 13, 2023, sec. Opinion, https://www.theguardian.com/commentisfree/2023/mar/13/deepfake-pornography-explosion.

  4. Ibid.

  5. Kat Tenbarge, “Found through Google, Bought with Visa and Mastercard: Inside the Deepfake Porn Economy,” NBC News, March 27, 2023, https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071; Moncarol Wang, “Don’t Believe Your Eyes: Fighting Deepfaked Nonconsensual Pornography with Tort Law,” The University of Chicago Legal Forum (The University of Chicago), accessed October 15, 2023, https://legal-forum.uchicago.edu/print-archive/dont-believe-your-eyes-fighting-deepfaked-nonconsensual-pornography-tort-law#heading-5.

  6. Sen Nguyen, “What Is ‘Revenge Porn’ and Are There Laws to Protect You? Here’s What You Need to Know,” CNN, accessed October 15, 2023, https://www.cnn.com/interactive/asequals/revenge-porn-protect-yourself-as-equals-intl-cmd/.

  7. Moncarol Wang, “Don’t Believe Your Eyes: Fighting Deepfaked Nonconsensual Pornography with Tort Law,” The University of Chicago Legal Forum (The University of Chicago), accessed October 15, 2023, https://legal-forum.uchicago.edu/print-archive/dont-believe-your-eyes-fighting-deepfaked-nonconsensual-pornography-tort-law#heading-5.

  8. Moira Donegan, “Demand for Deepfake Pornography Is Exploding. We Aren’t Ready for This Assault on Consent,” The Guardian, March 13, 2023, sec. Opinion, https://www.theguardian.com/commentisfree/2023/mar/13/deepfake-pornography-explosion.

  9. Kat Tenbarge, “Found through Google, Bought with Visa and Mastercard: Inside the Deepfake Porn Economy,” NBC News, March 27, 2023, https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071.

  10. Alexandra Levine, “In a New Era of Deepfakes, AI Makes Real News Anchors Report Fake Stories,” Forbes, October 12, 2023, https://www.forbes.com/sites/alexandralevine/2023/10/12/in-a-new-era-of-deepfakes-ai-makes-real-news-anchors-report-fake-stories/?sh=2fed1f8e57af.

  11. Ian Sample, “What Are Deepfakes – and How Can You Spot Them?,” The Guardian, January 13, 2020, https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.

  12. Ibid.

  13. Moncarol Wang, “Don’t Believe Your Eyes: Fighting Deepfaked Nonconsensual Pornography with Tort Law,” The University of Chicago Legal Forum (The University of Chicago), accessed October 15, 2023, https://legal-forum.uchicago.edu/print-archive/dont-believe-your-eyes-fighting-deepfaked-nonconsensual-pornography-tort-law#heading-5.

  14. Ibid.

  15. “H.R.3230 - DEEP FAKES Accountability Act,” Congress.gov (116th Congress, June 12, 2019), https://www.congress.gov/bill/116th-congress/house-bill/3230/all-actions.

  16. “H.R. 3230 (116th): Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019,” GovTrack, accessed October 15, 2023, https://www.govtrack.us/congress/bills/116/hr3230/text/ih; “Intel Introduces Real-Time Deepfake Detector,” Intel, November 14, 2022, https://www.intel.com/content/www/us/en/newsroom/news/intel-introduces-real-time-deepfake-detector.html.

  17. Caroline Quirk, “The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology,” Princeton Legal Journal, June 19, 2023, https://legaljournal.princeton.edu/the-high-stakes-of-deepfakes-the-growing-necessity-of-federal-legislation-to-regulate-this-rapidly-evolving-technology/#:~:text=In%20most%20cases%2C%20deepfakes%20may.

  18. Ibid.; Moncarol Wang, “Don’t Believe Your Eyes: Fighting Deepfaked Nonconsensual Pornography with Tort Law,” The University of Chicago Legal Forum (The University of Chicago), accessed October 15, 2023, https://legal-forum.uchicago.edu/print-archive/dont-believe-your-eyes-fighting-deepfaked-nonconsensual-pornography-tort-law#heading-5; Barbara Ortutay, “What You Should Know about Section 230, the Rule That Shaped Today’s Internet,” PBS NewsHour, February 21, 2023, https://www.pbs.org/newshour/politics/what-you-should-know-about-section-230-the-rule-that-shaped-todays-internet.

Isabel Skomro

Isabel Skomro is a member of the Harvard Class of 2024 and a HULR Staff Writer.
