An Analysis of Privacy Rights in the United States in the Age of AI
Artificial intelligence (AI), which relies on enormous quantities of digital information to train and run its algorithms, has raised legitimate concerns about consumer and privacy rights [1]. Although the right to privacy is not enumerated in the U.S. Constitution, it can be inferred through a progressivist or living constitutionalist lens. Digital software and technologies have progressed significantly since the 18th century, and particularly since the late 20th century. As such, the United States requires stronger and more comprehensive federal support for the right to privacy, with an emphasis on digital privacy, as the country enters an era of mass digital information and developing artificial intelligence. Federal regulation, including measures such as the constitutional codification of a right to privacy over one’s digital identity and data, must ensure corporate digital responsibility and place human rights at the center of future AI governance; such a framework is more necessary than ever to ensure the safety and security of all Americans in an ever-evolving digital landscape.
The Right to Privacy in the United States: A Brief History
Under American law, privacy is commonly associated with one’s image and sensitive information, including personal identifying information such as an individual’s address or Social Security number [2]. The Constitution does not mention the “right to privacy” verbatim. Instead, from a living constitutionalist perspective, which holds that the Constitution “adapts to changing circumstances and evolves over time,” the right to privacy can be inferred from the First, Fourth, Ninth, and Fourteenth Amendments [3]. The Fourth Amendment in particular offers the strongest federal support for a right to privacy. While the Amendment specifically addresses governmental intrusions in the form of unreasonable searches and seizures, it is also commonly interpreted as protecting a more general right to privacy for all Americans [4]. Despite the possibility of inferring the right to privacy from this text, it is important to note that these Amendments do not enumerate the right with full certainty or specificity.
Federal legal precedent additionally supports the right to privacy. In the context of case precedent, the right to privacy in America has historically been tied to healthcare, bodily autonomy, gender, and sexuality. Federal cases that highlight this include Griswold v. Connecticut [5] and Lawrence v. Texas [6], which concern a married couple’s right to purchase contraceptives and the right to private, consensual same-sex intimacy, respectively. Patient confidentiality is additionally protected by the Health Insurance Portability and Accountability Act (HIPAA), a federal law that safeguards patients’ sensitive physical and mental health information [7]. However, these various laws and judicial decisions are not as all-encompassing as a constitutional amendment or comparable federal legislation, and they do not protect Americans’ right to privacy in the digital sphere. Importantly, by protecting privacy exclusively in terms of bodily autonomy or healthcare, they also fail to protect the dual nature of likeness and its importance to digital privacy law. Because the right to privacy has never been codified verbatim as an American right under federal law, many legal scholars have argued that these Amendments, laws, and case decisions offer the right insufficient support. One such argument is enunciated in the seminal essay “The Right to Privacy” by Samuel Warren and Louis Brandeis, published in the Harvard Law Review in 1890 [8]. In what remains one of the most cited legal essays, Warren and Brandeis analyze existing law surrounding slander and libel, torts, copyright, and private property, and they raise concerns about the rise of photography and other technologies that infringe on domestic life [9]. They define domestic life as that which is discussed or done in the privacy of one’s home, and their essay frames the right to privacy as the right to be let alone as well as the right to avoid injury [10].
However, Warren and Brandeis fail to consider the role of a person’s marginalized identities in their understanding of sensitive information. Scholar Eden Osucha argues that the distinction between private and public is inherently racialized and gendered [11]. She notes that identity categories such as race, gender, sexuality, and religion may place marginalized groups at a disadvantage in their relative access to privacy rights. Warren and Brandeis, on the other hand, focus their conversation on white elites alone, when in reality privacy rights have been inextricably tied to racial inequality in America. Systemic racism in various economic, judicial, and social institutions has been well documented by scholars such as Kimberlé Crenshaw, Robert Bullard, Audre Lorde, and others [12]. Considering the pervasive nature of institutional racism and Osucha’s assertion that privacy is inherently racialized, legal discourse about privacy rights in America would benefit from keeping its relationship to race in mind [13]. As such, any future legal discourse surrounding the right to privacy, particularly as it relates to likeness, should center human rights and equality.
Modern Privacy Concerns and AI: The Duality of “Likeness”
As the world enters an era of mass information and developing technologies, the digital aspects of privacy become increasingly pertinent. As artificial intelligence develops and citizens’ sensitive digital data is put at risk by this new technology, it is important to highlight the duality of the term likeness, which encompasses both a person’s image and voice. Some citizens rightfully question whether policymakers should regulate new technologies that, in the hands of bad actors, could jeopardize their right to privacy as it relates to likeness. Emerging legal discourse on AI and privacy rights should shift its focus to acknowledge this duality of likeness rather than speaking only to likeness-as-image or likeness-as-voice. Recognizing the duality makes clear that “bad actors” encompass many different types of offenders, from identity thieves to sexual criminals to Hollywood elites.
AI-driven voice technology, for example, can now significantly aid criminals in committing financial fraud, as it both introduces new methods of fraud and accelerates existing ones, putting citizens’ sensitive financial information at high risk. As such, an emphasis on the aural dimension of likeness and privacy is increasingly pertinent to the current discourse surrounding financial fraud. From exploitative facial and voice recognition systems that exacerbate imposter scams to generative AI chatbots that intensify phishing scams, AI poses a real risk to consumers, as noted by Michael Atleson of the Federal Trade Commission [14]. The Federal Trade Commission further warns consumers that “a scammer could use AI to clone the voice of your loved one” and may use it to steal money by asking the person on the other end of the line to “wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs” [15]. In the age of AI, what qualifies as sensitive information now includes a person’s voice and image; as such, this information should have strong federal protection and legal grounds for punitive action if appropriated. Further regulating the development of generative AI may mitigate the potential for bad actors to use the technology to commit financial fraud. At the very least, greater emphasis on the aural nature of likeness in data privacy discourse would raise awareness among citizens of the potential for AI to be used to commit fraud.
Another area of serious concern for digital privacy rights is the growth of deep fakes: hyper-realistic videos and sounds of events that never occurred, artificially depicting a person while leaving little trace of manipulation [16]. In a recent scandal on the video-streaming platform Twitch, individuals used this technology to target women. Media star Blaire, known online as QTCinderella, was one such target; her likeness was used to create deep fake pornographic material without her consent, and she was unable to receive just compensation for the emotional and psychological effects this content had on her. No strong regulatory framework existed to protect her from this sexual cybercrime [17]. Her story serves as an example to lawmakers of the potential for AI to cause emotional distress in victims of sexual violence. Only fifteen U.S. states have enacted or proposed laws regulating deep fakes [18]. Instead of digital privacy protection that varies from state to state, deep fake regulation must exist at the federal level to provide more protection and support for potential future victims of sexual cybercriminals. According to Hany Farid, a professor of computer science at U.C. Berkeley, the burden of proof in such cases is currently determined by the specific platform or application on which the content circulated [19]. Future federal regulations should emphasize corporate responsibility for the circulation of deep fake content in order to shift the burden of proof away from victims.
The rise of these non-consensual sexual cybercrimes as AI becomes more ubiquitous, particularly in the hands of individuals with malicious intent, is all the more concerning. As scholars Yar and Drew highlight in a study of Australia and England and Wales, non-consensual image-based sexual violence has been a growing concern globally [20]. As early as 2010, so-called “revenge porn” sites were operating with upwards of 35,000 submissions every week [21]. In Australia, nationwide surveys indicated that the number of individuals who experienced image-based abuse rose by 10 percent between 2014 and 2017 [22]. The threat of image-based abuse grows further as AI and deep fake technologies become more widespread and user-friendly, opening the door for a larger population of perpetrators: producing a deep fake now requires only internet access and a motive. Including likeness within the framework of one’s sensitive information in future privacy legislation would provide victims a legal basis to seek compensation for the emotional and psychological harm endured from cybercrimes. Moreover, visual likenesses can be appropriated in multiple ways: beyond the emotional damage caused by sexual cybercrimes, the growth of AI has meant insufficient financial compensation for some in the entertainment industry whose images or voices are used.
In the entertainment industry, concerns regarding AI and likeness, along with calls for stronger federal laws to regulate and protect a person’s image and voice, have become especially pertinent. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) represents thousands of artists across the entertainment and media industries and seeks to protect the rights of creatives who face potential privacy violations from developing AI [23]. The union released a statement on AI and digital doubles in media arguing that “Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creative works, or professional voices and likenesses, without permission or compensation” [24]. One example of this exploitation concerns background actors. A union negotiator with SAG-AFTRA claimed that studios may compensate background actors with only a single day’s pay for being digitally scanned, and that their digital likenesses could then be reproduced with the assistance of artificial intelligence and used by those studios forever [25]. Producers and union officials disagree over the meaning of consent within the context of paid usage of actors’ likenesses [26]. Another example of generative AI’s potential to appropriate likeness can be seen in a recent dispute between Scarlett Johansson and OpenAI: a voice that sounded eerily similar to her own was used in a recently released voice AI product [27]. Shortly after Johansson released a statement delineating her complaints, the company withdrew the product [28]. It is reasonable to question whether an actor without Johansson’s prestige and recognition would receive the same consideration from OpenAI. Without federal enforcement of the right to digital privacy and the right to govern the usage of one’s own likeness, actors, particularly background actors, may be financially exploited by studios and producers in the coming years.
The Challenges of Regulating Artificial Intelligence
As new regulations pertaining to AI continue to be introduced, lawmakers must balance the technology’s benefits and downsides. Because AI encompasses a broad range of intelligent technology, from complex algorithmic systems to the “Top Picks” section of Netflix, no one-size-fits-all legislation can govern it. Due to this complexity, regulating AI will likely become one of the paramount legal and political challenges of the 21st century.
Proponents of the technology underscore the financial incentives to utilize AI, as it provides business owners with opportunities to cut costs. Many foresee a future in which developing technology does not completely take over the job market but instead allows businesses to perform tasks with fewer employees. Yet others, such as billionaire Elon Musk, believe that “there will come a point where no job is needed” [29]. Because adequate federal legislation regulating the use and development of AI is lacking, the technology currently operates in a relatively unconstrained environment. To ensure equality for all Americans and a sustainable job market in the long term, legislators must take on the challenge of regulating the growing force that is AI. Although major CEOs and corporations will fight hard against AI regulation, politicians and lawyers should recognize that such regulations benefit all citizens.
Concluding Remarks
The lack of concrete language protecting a right to privacy at the federal level threatens the safety and just treatment of American citizens in today’s digital world. The unregulated development of artificial intelligence raises significant legal concerns. Stronger privacy rights, particularly those concerning the dual aural and visual nature of likeness, will help ensure the welfare of all Americans. The modern discourse surrounding privacy law should emphasize this dual nature of likeness and confront the problematic history of privacy rights in America when considering future federal digital privacy regulations. Major players such as OpenAI, Google, and Microsoft currently battle for a stronghold in the AI market [30]. Politicians and legislators should enact stronger, more comprehensive federal support for the right to privacy, demonstrating that they value the safety of their constituents over the profit of corporations. These measures would help ensure a just and secure future for all Americans.
Citations:
[1] Alvaro Puig, “Scammers use AI to enhance their family emergency schemes,” Federal Trade Commission Consumer Advice, March 20, 2023, https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes.
[2] Cayce Myers, “Warren, Samuel & Louis Brandeis. The Right To Privacy,” Communication Law and Policy 25, no. 4 (2020): 519, https://doi.org/10.1080/10811680.2020.1805984.
[3] Scott Dodson, “A Darwinist View of the Living Constitution,” Vanderbilt Law Review 61, no. 5 (2008): 1317-1347, https://www.proquest.com/scholarly-journals/darwinist-view-living-constitution/docview/198929890/se-2.
[4] U.S. Const. amend. IV.; Orin S. Kerr, “The effect of legislation on Fourth Amendment protection,” Michigan Law Review 115, no. 7 (2017): 1118, https://link.gale.com/apps/doc/A494743007/AONE?u=tall85761&sid=bookmark-AONE&xid=545a4fb7.
[5] Griswold v. Connecticut, 381 U.S. 479 (1965).
[6] Lawrence v. Texas, 539 U.S. 558 (2003).
[7] U.S. Department of Health & Human Services, “Summary of the HIPAA Privacy Rule,” 2003, 5, https://www.hhs.gov/sites/default/files/privacysummary.pdf.
[8] Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4, no. 5 (1890).
[9] Ibid.
[10] Ibid.
[11] Eden Osucha, “The Whiteness of Privacy: Race, Media, Law,” Camera Obscura 24, no. 1 (2009): 67-107, https://doi.org/10.1215/02705346-2008-015.
[12] Kimberlé Crenshaw, “Race, Reform, and Retrenchment: Transformation and Legitimation in Antidiscrimination Law,” Harvard Law Review 101, no. 7 (1988): 1331-1387; Robert Bullard and Beverly Wright, The Wrong Complexion for Protection (2012); Audre Lorde, “The Master’s Tools Will Never Dismantle the Master’s House” (speech, N.Y.U. Institute for the Humanities, 1984).
[13] Eden Osucha, “The Whiteness of Privacy: Race, Media, Law,” Camera Obscura 24, no. 1 (2009): 73, https://doi.org/10.1215/02705346-2008-015.
[14] Michael Atleson, “Chatbots, deepfakes, and voice clones: AI deception for sale,” FTC Business Blog, March 20, 2023, https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
[15] Alvaro Puig, “Scammers use AI to enhance their family emergency schemes,” Federal Trade Commission Consumer Advice, March 20, 2023, https://consumer.ftc.gov/consumer-alerts/2023/03/scammers-use-ai-enhance-their-family-emergency-schemes.
[16] Meredith Somers, “Deepfakes, explained,” MIT Sloan, July 21, 2020, https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained.
[17] Bianca Britton, “They appeared in deepfake porn videos without their consent. Few laws protect them,” NBC News, February 14, 2023, https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372.
[18] Isaiah Poritz, “States Are Rushing to Regulate Deepfakes as AI Goes Mainstream,” Bloomberg, June 20, 2023, https://www.bloomberg.com/news/articles/2023-06-20/deepfake-porn-political-ads-push-states-to-curb-rampant-ai-use?leadSource=uverify%20wall.
[19] Bianca Britton, “They appeared in deepfake porn videos without their consent. Few laws protect them.” NBC News, February 14, 2023, https://www.nbcnews.com/tech/internet/deepfake-twitch-porn-atrioc-qtcinderella-maya-higa-pokimane-rcna69372.
[20] Jaqueline Drew and Majid Yar, “Image-Based Abuse, Non-Consensual Pornography, Revenge Porn: A Study of Criminalization and Crime Prevention in Australia and England & Wales,” International Journal of Cyber Criminology 13, no. 2 (July 2019): 578–594, https://www.proquest.com/scholarly-journals/image-based-abuse-non-consensual-pornography/docview/2384584389/se-2?accountid=4840.
[21] Ibid.
[22] Ibid.
[23] “SAG-AFTRA Statement on the Use of Artificial Intelligence and Digital Doubles in Media and Entertainment,” News Updates, SAG-AFTRA, March 17, 2023, https://www.sagaftra.org/sag-aftra-statement-use-artificial-intelligence-and-digital-doubles-media-and-entertainment.
[24] Ibid.
[25] Bobby Allyn, “Movie extras worry they'll be replaced by AI. Hollywood is already doing body scans,” National Public Radio, August 2, 2023, https://www.npr.org/2023/08/02/1190605685/movie-extras-worry-theyll-be-replaced-by-ai-hollywood-is-already-doing-body-scan#:~:text=A%20union%20negotiator%20has%20claimed,for%20the%20rest%20of%20eternity.%22.
[26] Ibid.
[27] Megan Cerullo, “Scarlett Johansson was ‘shocked, angered’ by OpenAI’s ChatGPT voice that sounds like her,” CBS News, May 21, 2024, https://www.cbsnews.com/news/openai-chatgpt-scarlett-johansson-ai-voice/.
[28] Ibid.
[29] Ryan Browne, “Elon Musk says AI will eventually create a situation where ‘no job is needed,’” CNBC, November 2, 2023, https://www.cnbc.com/2023/11/02/tesla-boss-elon-musk-says-ai-will-create-situation-where-no-job-is-needed.html.
[30] Nico Grant and Karen Weise, “In A.I. Race, Microsoft and Google Choose Speed Over Caution,” New York Times, April 7, 2023 (updated April 10, 2023), https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html.