A Problem with Few Deterrents: Difficulties in Prosecuting Illicit "Deep-Fakes"
In March 2021, the release of several TikTok videos featuring Tom Cruise performing magic tricks with a coin sparked a viral sensation. The mercurial Cruise has been known to make controversial public displays, but these videos caught the public’s eye because the images of the actor were inauthentic: they had been digitally edited as part of a “deep-fake,” or digital content altered to misrepresent the subject as having said or done something that they had not.1 While these particular videos had few negative consequences, deep-fakes have the potential to cause severe harm to those misrepresented. As a result, several state legislatures have passed bills criminalizing the use of deep-fakes without either the express permission of the individuals depicted or plainly visible disclaimers that the videos have been edited. A bill criminalizing the use of deep-fakes without such disclaimers was also introduced in Congress in 2019, though it has since stalled.2 Yet even with the passage of this legislation, deep-fakes pose a serious challenge for the American legal system for several reasons. Copyright and tort suits against creators of deep-fakes are likely to meet robust free-speech defenses, and the origin of any given piece of content is very difficult to trace. Furthermore, even where punishments for deep-fakes are established, the legal system’s response is often too slow to remedy the damage, which makes serious deterrence all the more necessary. Because of these legal and technological obstacles to enforcement, addressing deep-fakes may require a widespread readjustment of what it means to trust digital content, not merely legislation criminalizing them.
The ability of criminal and civil defense lawyers to shield deep-fakes under the umbrella of free speech gives would-be violators considerable leeway in the content they can produce. The First Amendment may pose a major barrier to litigation against creators of deep-fakes. In United States v. Alvarez (2012), the Supreme Court ruled that the Constitution does not give the government the power to restrict expression because of its content.3 As a result, content-based restrictions are presumed to be unconstitutional.4 This presumption of unconstitutionality would likely extend to restrictions on deep-fakes as well. The Court held that false statements may not be restricted except in cases such as defamation and true threats, and in those cases a victim can recover damages only from a known creator, who is quite difficult to identify in the case of deep-fakes. Furthermore, attempts to litigate the improper and unauthorized use of one’s likeness under copyright law or a right to privacy require that the complainant could have “anticipated financial gain” from the use of that likeness.5 Therefore, where a deep-fake has not caused measurable commercial damage, a plaintiff cannot recover under right-of-publicity claims, a major conduit for intellectual property suits.6 With many of the common vehicles for overcoming free-speech defenses thus hampered, defendants in civil cases enjoy considerable protection.
Just as the First Amendment poses obstacles to privacy and tort claims, the difficulty of determining a deep-fake’s origin makes it hard to create significant deterrents. A deep-fake is created with machine learning: a neural network is trained to recognize a given person’s face, learns to generate a synthetic copy of that face, and then substitutes the copy into the target image.7 These neural networks can also replicate an individual’s voice, which further adds to a deep-fake’s seeming authenticity. Because the code for these programs is widely available online and easily replicable, virtually anyone can produce a deep-fake.8 Once a deep-fake has been re-posted and shared on social media, its origins become even more obscured. If prosecutors cannot prove who created a given deep-fake, violators of anti-deep-fake legislation will be nearly impossible to identify and prosecute. Until the creators of illicit deep-fakes can be reliably identified, legislation criminalizing their creation will prove toothless.
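To make the creation process concrete, the sketch below illustrates, in deliberately simplified form, the shared-encoder, two-decoder autoencoder design commonly associated with face-swap deep-fakes. It is a minimal illustration only: the layer sizes, variable names, and use of PyTorch are assumptions chosen for demonstration, not a description of any particular tool discussed above.

```python
# Minimal, illustrative sketch (not a production system) of the shared-encoder,
# two-decoder autoencoder commonly used for face-swap deep-fakes.
# All layer sizes and names are hypothetical choices for demonstration.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific identity from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity: decoder_a is trained to reconstruct
# person A's faces and decoder_b to reconstruct person B's faces.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# The "swap": encode a frame of person A, then decode it with person B's decoder,
# producing B's face in A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder for a real video frame
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Because the two decoders share a single encoder, whatever the network learns about pose and expression transfers across identities; this is what makes the swap convincing and the underlying code so easy to repurpose once it circulates online.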
Recent efforts by major social media platforms to develop technology to identify deep-fakes are laudable for their accuracy in identifying the models behind such content, but these tools still cannot pinpoint the person or people who created the image. Recognizing that tracking the origins of deep-fakes is critical to deterring them, companies like Meta and Microsoft have built artificial intelligence that reverse-engineers deep-fakes to determine the models that created them. Meta’s project in particular has demonstrated 70 percent accuracy in initial attempts to identify deep-fakes and the models used to build them, and the company has partnered with several institutions to improve its detection software.9 However, these tools can only identify the model used to produce the content, which may be widely accessible on public platforms, and not necessarily the user who created the deep-fake. In addition, attempts to restrict the use of these models on social media platforms might trigger lawsuits under the same freedom-of-expression protections that hamper tort suits against illicit deep-fake creators.
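For readers unfamiliar with model attribution, the following simplified sketch conveys the general idea behind reverse-engineering efforts of this kind: extract the high-frequency residue (a rough “fingerprint”) that a generator leaves in an image, then classify it against a fixed set of known models. This is not Meta’s or Microsoft’s actual system; the extraction method, the number of candidate models, and the PyTorch implementation are hypothetical choices made purely for illustration.

```python
# Hypothetical illustration of model attribution: guess which known generative model
# produced an image from its high-frequency "fingerprint" residue.
# This is a toy sketch of the general idea, not any company's actual detector.
import torch
import torch.nn as nn

class FingerprintExtractor(nn.Module):
    """Approximates a generator 'fingerprint' as the image minus a blurred copy,
    isolating high-frequency artifacts that generative models tend to leave behind."""
    def __init__(self):
        super().__init__()
        self.blur = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return x - self.blur(x)

class ModelAttributor(nn.Module):
    """Maps a fingerprint to scores over a fixed set of candidate generator models."""
    def __init__(self, num_known_models=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_known_models)

    def forward(self, fingerprint):
        return self.classifier(self.features(fingerprint))

extractor, attributor = FingerprintExtractor(), ModelAttributor()
suspect_image = torch.rand(1, 3, 64, 64)   # placeholder for a suspected deep-fake
logits = attributor(extractor(suspect_image))
predicted_model = logits.argmax(dim=1)     # index of the most likely source model
print(predicted_model)
```

Even in this toy form, the limitation noted above is visible: the output is an index into a list of candidate models, not the identity of the person who ran one of them.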
Given the many ways that creators of illicit deep-fakes can evade enforcement, no single policy will prevent their creation. Instead, the problem calls for a mix of policy and general skepticism toward videos depicting individuals in a negative light. The first element would be continued government funding for efforts, by both federal agencies and large tech companies, to improve deep-fake detection software. One of the largest risks posed by deep-fakes is their potential spread over social media; sufficiently accurate automatic detection software could prevent that spread entirely. Furthermore, the companies creating the detection algorithms should provide licensed access to governments, courts, and other certified entities that may be compromised by edited content. Courts, for instance, should prepare to be presented with fraudulently manipulated content as evidence. Until effective software becomes available, they should view such content with suspicion and consult experts in deep-fake detection to determine whether evidence bears the hallmark signs of manipulation. More importantly, the present difficulty of legislating against deep-fakes demands non-legislative solutions, namely widespread skepticism toward improbable or explosive content. Some of this skepticism may develop naturally as deep-fakes become more common. Even so, more than 14,000 pieces of deep-fake content had already been published as of 2020, and until effective deterrents exist, citizens must grow increasingly wary of digital content.10
References
1 Scott Stump, “Tom Cruise Deepfake Videos on TikTok Leave People Baffled,” Today (2021).
2 Yvette D. Clarke, DEEP FAKES Accountability Act, H.R. 3230, 116th Cong. (2019).
3 "United States v. Alvarez," Oyez, Accessed March 16, 2022. https://www.oyez.org/cases/2011/11-210.
4 Shannon Reid, “The Deepfake Dilemma: Reconciling Privacy and First Amendment Protections,” University of Pennsylvania Journal of Constitutional Law 23, no. 1 (2021): 209.
5 O’Bannon v. National Collegiate Athletic Association, 802 F.3d 1049 (9th Cir. 2015).
6 Kelsey Farish, “Do Deepfakes Pose a Golden Opportunity? Considering Whether English Law Should Adopt California’s Publicity Right in the Age of the Deepfake,” Journal of Intellectual Property Law & Practice 15, no. 1 (2020): 40–48, https://doi.org/10.1093/jiplp/jpz139.
7 Tal Hassner and Xiaoming Liu, “Reverse Engineering Generative Models from a Single Deepfake Image,” Meta AI (2021).
8 Meredith Somers, “Deepfakes, Explained,” MIT Sloan (2020).
9 Ibid.
10 Ibid.