AI Laws in the US and EU: Achieving a “Blanket Framework”
Introduction
A recording of your voice is being played among your coworkers. In it, you repeatedly use offensive and discriminatory language. While the judgment or sudden job loss that ensues is difficult to cope with, far more baffling is the fact that you cannot recall ever having such a conversation. To dystopian enthusiasts, this may sound like the synopsis of a Black Mirror episode, the popular Netflix series that explores the dangers of technological reliance and development. Yet Artificial Intelligence (AI) has made it a real-life occurrence. Individuals in the US and across the world are using AI to create deepfakes (manipulated video, images, or audio) and other forms of inappropriate content that directly threaten people's privacy, well-being, and public perception [1]. What sets this behavior apart from crimes of defamation, harassment, forgery, and stalking is how few legal protections have been implemented for its victims.
In both the European Union (EU) and the United States (US), AI law remains a work in progress, and neither region has come close to sufficiently updating its legal system to accommodate these powerful shifts in technology. Although it would be a mischaracterization to state that no reforms have been implemented, gaps remain in both systems. The EU approaches AI law through a deterrence framework that punishes dangerous use of AI with fines, close monitoring of AI systems, and extensive testing [2]. The US primarily stretches the protections of existing legislation onto AI-related crimes that fall within its categories. That framework, however, rests on laws that have yet to be properly amended to account for AI development, and on federalism, with states passing their own "corrective" laws against AI-related crimes [3]. By combining the strongest aspects of each region's AI legal structure, we can achieve legislation with broad coverage and multifaceted enforcement mechanisms. This would provide powerful legal protection to vulnerable individuals and may even help standardize how nations navigate AI law and AI-related law enforcement for their citizens.
US Regulatory Policy on AI
The US's current legal system follows a "patchwork" approach to AI. In the absence of a strong national AI policy, a "law of the land" that defines the technology's legal limitations, regulation has depended heavily on executive orders, state laws, and existing statutes that fail to adequately deter AI crimes. The TAKE IT DOWN Act remains the country's only national AI-specific law; it added new criminal prohibitions on the nonconsensual publication of intimate images as well as new requirements for platforms [4]. Among state laws, Tennessee's 2024 Ensuring Likeness, Voice and Image Security (ELVIS) Act expanded restrictions on unauthorized commercial use of a person's name, image, and voice [5]. Similarly, New Jersey passed a law in 2025 that criminalizes creating or knowingly disclosing deceptive audio or images for the purpose of furthering the commission of a crime [6]. From these two pieces of legislation alone, we can sense a trend among state legislatures toward expanding the legal boundaries of intellectual property, in this case in relation to individuals' personal characteristics. In the era of AI, it is precisely these innately human qualities that have proven susceptible to extensive manipulation and abuse, hence the deepfake example in the introduction of this piece.

In 2025, all 50 states, along with Puerto Rico, the Virgin Islands, and Washington, DC, introduced legislation on the topic of AI [7]. In total, around 100 measures were enacted across 38 states in the same year [8]. Although this wave of policymaking reflects a positive step toward a legal system better equipped to respond to AI abuses, each law binds only the state that enacted it, which weakens any widespread impact. Residents of each state thus experience their own unique set of AI-related legal protections: strong or weak, broad or narrow, many or few. In the context of intellectual property rights, or the use of individuals' characteristics without consent, it becomes especially important that states with effective legislation serve as "models" for others to follow. Yet the very problem of federalism in AI regulation is that, without a national system to enforce this cohesion, it may not be fostered on its own.
At the federal level, members of Congress have grappled with hundreds of ideas for AI bills but had enacted fewer than 30 as of May 2025 [9]. Of those that passed, the National Artificial Intelligence Initiative Act codified the American AI Initiative, established the National Artificial Intelligence Initiative Office, an interagency body within the Office of Science and Technology Policy, and created the National AI Advisory Committee, which drafts reports and recommendations for the sitting presidential administration [10]. The AI in Government Act of 2020 and the Advancing American AI Act are also worth noting [11]. The TAKE IT DOWN Act established criminal penalties for the publication of "digital forgeries" like deepfakes, with a particular focus on protecting children [12]. This law, however, has done little to alleviate algorithmic bias in, for example, housing or credit decisions [13]. On the executive side, both former President Biden and President Trump (during his first term) took AI-related actions, focusing on fostering innovation while managing risk. Trump's executive orders, like "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," have focused heavily on advancing innovation in both the private and public sectors [14]. Because these executive actions are not permanent, they can be modified or revoked, and government agencies may have engaged in activities that the current administration no longer supports.
Once again, this poses a barrier to the consistency of AI law and to how private-sector entities and government agencies alike engage with the technology in ways that touch Americans' privacy and data. It is important to note that, while laws like Title VII of the Civil Rights Act, the Americans with Disabilities Act, and the Fair Housing Act remain in place, there may be limits to the legal defense they can provide in the face of this technology. Jo Gasior-Kavishe of the ACLU writes, "In many cases, individuals may not even know AI was used, deployers may not be aware of its discriminatory impact, and developers may not have tested the AI model for discriminatory harms" [15]. Although one may see the usefulness of "repackaging" existing legislation like HIPAA or the Equality of Opportunity Act, which entitle consumers to fair and private services [16], such laws may easily be bypassed by AI systems that had not been contemplated when they were drafted. The current US legal system has not undergone the reforms necessary to protect individuals' rights from potential AI threats. Federalism, in the case of AI law, functions effectively only when a robust national policy is already in place. Its absence, which the US currently faces, weakens Americans' legal security by offering only the shield of state-bound policies, temporary orders, and laws that have yet to address AI.
EU Regulatory Policy on AI
The EU, relative to the United States, has taken a broader approach to AI regulation. While some European countries have implemented their own laws on deepfakes or digital identity, a unifying framework serves as a benchmark for member states to build on when drafting their own laws. That framework, the AI Act, is the first comprehensive legal framework on AI [17]. The Act classifies AI systems by risk level: unacceptable-risk systems are prohibited, high-risk systems are regulated, limited-risk systems are subject to lighter transparency obligations, and minimal-risk systems are unregulated [18]. Providers of high-risk AI systems who intend to place them on the market or into service in the EU bear these obligations regardless of where they are based [19]. Specifically, high-risk providers must establish risk management systems, conduct data governance, draw up technical documentation, and ensure human oversight, among other requirements [20]. The Act also requires that material modified with AI be labeled as such [21].
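To make the Act's tiered structure concrete, the sketch below models it as a simple lookup table in Python. This is purely illustrative: the four tier names follow the high-level summary cited above [18], but the `RiskTier` enum, the `OBLIGATIONS` table, and the `required_obligations` helper are hypothetical names invented for this example, not anything defined by the regulation.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """Risk tiers described in the AI Act's high-level summary [18]."""
    UNACCEPTABLE = auto()  # prohibited outright
    HIGH = auto()          # permitted, but heavily regulated
    LIMITED = auto()       # lighter transparency obligations
    MINIMAL = auto()       # unregulated

# Hypothetical mapping from tier to headline obligations; the Act itself
# is far more detailed than this illustration.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["placement on the EU market is prohibited"],
    RiskTier.HIGH: [
        "establish a risk management system",
        "conduct data governance",
        "draw up technical documentation",
        "ensure human oversight",
    ],
    RiskTier.LIMITED: ["label AI-generated or AI-modified material"],
    RiskTier.MINIMAL: [],  # no obligations under the Act
}

def required_obligations(tier: RiskTier) -> list[str]:
    """Return the headline obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for duty in required_obligations(RiskTier.HIGH):
        print("-", duty)
```

The design point the sketch captures is that obligations attach to a system's risk tier rather than to the provider's location, which is why a provider based outside the EU still bears them when placing a high-risk system on the EU market.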
As in the US, where states implemented their own AI-related laws, some EU nations proceeded to create their own AI legislation in the aftermath of this landmark intervention. Italy, for example, became in September 2025 the first EU country to approve a comprehensive national law regulating the use of AI, imposing prison terms on those who use the technology to "cause harm, such as generating deepfakes," and limiting child access [22]. The law is enforced by the Agency for Digital Italy and the National Cybersecurity Agency [23]. While its intention to promote human-centric, transparent AI use alongside innovation parallels the US's policy objectives, the legal approach to this common end is very different: Italian residents are already protected by the EU's clearly outlined limitations on AI developers, and Italy's law only amplifies those existing safeguards. Denmark, too, is proposing new deepfake legislation as an extension of its digital copyright law [24]. The amendment would allow individuals affected by deepfakes to request their removal and would let artists demand compensation for unauthorized use of their image, a right they would hold for 50 years after their death [25]. If passed, the policy would lay the legal foundation for seeking damages under Danish law [26].
Next Steps
Undeniably, there are myriad improvements that could be made to the way these legislative bodies impose legal safeguards against AI violations and unlawfulness. By scrutinizing the limitations and merits of the existing frameworks, we can set an expectation for what an ideal, AI-adapted legal system would look like and, hopefully, work toward achieving it. The EU's risk model, for example, is rich with specificity, oversight, and regulatory controls. It clearly outlines which kinds of AI models are allowed to enter the market and how a company whose model carries a higher risk level can eventually realize that model, or be barred from doing so if it fails to obtain the required authorizations. In this way it tries to minimize the risk of powerful AI tools being misused, creating a "screening process" that filters the models themselves before they reach the hands of consumers who may be harmed, or cause harm, through them. The US's TAKE IT DOWN Act, while national, mitigates these potential harms only once they have already been inflicted. Its fundamental weakness is that it condemns clearly visible acts of AI misuse but can easily overlook those that are less apparent.
Tanner, Stewart, and Turner Lee highlight this very issue in their article "Addressing overlooked AI harms beyond the TAKE IT DOWN Act," stating that "AI models have often been described as 'opaque' or 'black boxes,' but that does not mean we can ignore risks that we cannot see" [27]. Thus, when we note that US national policy has kept pace poorly with AI advancements, we do not mean that no national laws have been passed, only that they have been insufficient. It is equally important to highlight that the EU's framework has faced its own challenges. Its heavy documentation demands are costly and time-consuming for businesses, especially small ones, to satisfy [28]. For example, a 25-person medical device manufacturer may face documentation costs equal to 15% of its R&D budget [29]. This disproportionate effect harms not only the enterprises and startups that could benefit the region's economy, but also the very longevity of the law. Although the US may have focused too heavily on promoting innovation without compensating with stronger compliance mechanisms, the business concerns voiced by those subject to the EU's AI rules demonstrate that overzealous regulation can be detrimental. To protect individuals, we must strike a balance between laws that set clear boundaries and procedures and laws that, however corrective in theory, are unfeasible to comply with. "Stricter" laws are not inherently negative, but when policies are too difficult to apply, they become impractical and, consequently, weakly enforced.
At Pikesville High School in Maryland, Principal Eric Eiswert faced severe judgment, near job loss, and a tarnished reputation after an AI-generated recording of his voice was released [30]. In Almendralejo, a town in southern Spain, underage girls were victimized by AI-generated nude images of themselves circulating on social media [31]. When one approaches the subject of AI policy and law, it is easy to forget the magnitude of harm these systems can cause, and that, as with any other crime, there are victims who will need to seek recourse through the legal system. Stagnant national laws would allow these illicit activities to proliferate, with the evolution of the technology outpacing the defensive measures in place. Underestimating the effects AI can have on individuals, by failing to restructure laws or create new ones that address potential inequality concerns, privacy infringements, and aggressions against human dignity, will position people across the world as direct targets without legal refuge.
Notes
[1] Hadjimichael, M. (2025, October 14). EU and US AI Policies Head Their Own Way. Strategy International. https://strategyinternational.org/2025/10/14/publication208/
[2] U.S. Government Accountability Office (GAO). (2024). Deepfakes.
[3] European Parliament. (2025, February 19). EU AI Act: First Regulation on Artificial Intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence; Congressional Research Service. (2025). Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress. https://www.congress.gov/crs_external_products/R/PDF/R48555/R48555.2.pdf
[4] The TAKE IT DOWN Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images. (2025). Congress.gov. https://www.congress.gov/crs-product/LSB11314
[5] The Conference Board. (2025, November 5). Policy Backgrounder: Me, Myself, and IP: AI and the Deepfake Problem. https://www.conference-board.org/research/ced-policy-backgrounders/me-myself-and-ip-ai-and-the-deepfake-problem
[6] Ibid
[7] National Conference of State Legislatures. (2025, March 22). Artificial Intelligence 2025 Legislation. https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation
[8] Ibid
[9] Ibid
[10] Ibid
[11] Ibid
[12] Tanner, B., Stewart, J., & Turner Lee, N. (2025, December 11). Addressing overlooked AI harms beyond the TAKE IT DOWN Act. Brookings. https://www.brookings.edu/articles/addressing-overlooked-ai-harms-beyond-the-take-it-down-act
[13] Ibid
[14] Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. (2020, December 3). The White House. https://trumpwhitehouse.archives.gov/presidential-actions/executive-order-promoting-use-trustworthy-artificial-intelligence-federal-government/
[15] Gasior-Kavishe, J. (2025, December 3). AI is Infringing on Your Civil Rights. Here’s How We Can Stop That. American Civil Liberties Union. https://www.aclu.org/news/racial-justice/ai-is-infringing-on-your-civil-rights-heres-how-we-can-stop-that
[16] Drata. (2026). Artificial Intelligence Regulations: State and Federal AI Laws 2026. https://drata.com/blog/artificial-intelligence-regulations-state-and-federal-ai-laws-2026
[17] The Conference Board. (2025, November 5). Policy Backgrounder: Me, Myself, and IP: AI and the Deepfake Problem. https://www.conference-board.org/research/ced-policy-backgrounders/me-myself-and-ip-ai-and-the-deepfake-problem; European Commission. (2025, August 1). AI Act. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[18] Future of Life Institute. (2024, February 27). High-level Summary of the AI Act. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/high-level-summary/
[19] Ibid
[20] Ibid
[21] The Conference Board. (2025). Me, Myself, and IP: AI and the Deepfake Problem.
[22] Giuffrida, A. (2025, September 18). Italy first in EU to pass comprehensive law regulating use of AI. The Guardian. https://www.theguardian.com/world/2025/sep/18/italy-first-in-eu-to-pass-comprehensive-law-regulating-ai
[23] Ibid
[24] Willige, A. (2025, July 30). Deepfake legislation: Denmark takes action. World Economic Forum. https://www.weforum.org/stories/2025/07/deepfake-legislation-denmark-digital-id/
[25] Ibid
[26] Ibid
[27] Tanner, Stewart, & Turner Lee, Addressing overlooked AI harms beyond the TAKE IT DOWN Act, Brookings, 2025.
[28] AI Policy Bulletin. It’s Too Hard for Small and Medium-Sized Businesses to Comply with the EU AI Act. Here’s What to Do. https://www.aipolicybulletin.org/articles/its-too-hard-for-small-and-medium-sized-businesses-to-comply-with-eu-ai-act-heres-what-to-do
[29] Ibid
[30] Lake, T. (2024, April 26). A school principal faced threats after being accused of offensive language on a recording. Now police say it was a deepfake. CNN. https://edition.cnn.com/2024/04/26/us/pikesville-principal-maryland-deepfake-cec
[31] Hedgecoe, G. (2023, September 23). AI-generated naked child images shock Spanish town of Almendralejo. BBC News. https://www.bbc.com/news/world-europe-66877718