The Problem of Deepfake Pornography

In the past several years, the use of AI has grown exponentially. This growth brings many benefits, including improved healthcare, broader access to education, and time savings, but it also brings just as many challenges. Because AI is a new phenomenon, its legality in many spheres has been neither thoroughly analyzed nor legislated. One of the most recent forms of dangerous AI content is sexual harassment through AI deepfakes, and the problem only seems to be getting worse.

The rise in deepfake pornography is not only a privacy issue: according to a 2019 study, a staggering 96% of all deepfake videos were nonconsensual pornography. It is also a reminder of the deep-seated misogyny within our culture, as 99% of the individuals targeted in deepfake pornography are women. Women, especially those in the entertainment industry and those in political power, are thus far more likely to be the targets of these attacks. This disproportionate targeting falls in line with misogynistic trends of undermining women's achievements through hypersexualization and dehumanization. Unfortunately, there are few avenues for women targeted through this type of media to find justice. It falls to those targeted to identify the individual who created and disseminated the deepfakes, and even if the perpetrator is found, legal recourse is essentially limited to civil court, which generally requires both parties to live in the same state. If the creator resides in another jurisdiction or country, it is nearly impossible to pursue legal action.

Although much AI-generated content is new, we can only make so many excuses for our government's lax regulation. In fact, United States law makes it extremely difficult for victims to find legal recourse if they are targeted. Section 230 of the Communications Decency Act states, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This creates a convenient carve-out for media distribution platforms, shielding them from liability for illegal content posted by their users. When suing online distributors is legally impossible, there is very little victims can do to defend themselves.

Currently, only state-level laws regulate the creation and distribution of deepfake pornography, such as those in Virginia and California. Virginia was the first state in the United States to criminalize the distribution of nonconsensual deepfake pornography, classifying the distribution of nonconsensual "falsely created" explicit images and videos as a Class 1 misdemeanor punishable by up to a year in jail and a $2,500 fine. Likewise, California has enacted laws that allow victims of nonconsensual deepfake pornography to sue for damages. At the national level, former President Trump signed the first federal law pertaining to deepfakes on December 20, 2019. The $738 billion defense policy package, known as the National Defense Authorization Act for Fiscal Year 2020 (NDAA), contains a deepfake provision; however, it does nothing to regulate nonconsensual pornography, focusing mostly on election-time disinformation. Further, President Biden's reauthorization of the Violence Against Women Act (VAWA) established a federal civil cause of action for individuals whose intimate visual images are disclosed without their consent, allowing victims to recover damages and legal fees, and created a new National Resource Center on Cyber Crimes Against Individuals. However, this also does nothing to protect against deepfake pornography.

The fast pace of AI development creates an ever-changing online landscape, rife with new problems and dangers, and it is the government's job to respond to them. By 2023, the total number of deepfake videos online had reached 95,820, a 550% increase over 2019. This is a serious social problem causing deep harm to those affected, and the drastic rise in the dissemination of this type of media, paired with lax federal regulation, is a dangerous combination. Some federal regulation has recently been attempted, such as H.R. 5586, the DEEPFAKES Accountability Act, a bill introduced in Congress by Congresswoman Yvette Clarke. While the bill has not yet become law, it shows that some lawmakers care. The bottom line is that nonconsensual deepfake pornography is a serious issue affecting thousands of people, and it must be remedied. The federal government must be held accountable if we are to ensure accessible, quick, and comprehensive justice.

Bella Evans

Bella Evans is a staff writer for the Harvard Undergraduate Law Review.
