Code vs. Conscience: The Perils of AI’s Legal Hallucinations and Amorality

AI undoubtedly offers the legal profession a range of efficiency improvements that matter to the pursuit of justice. Its ability to process large volumes of data and answer specific, unique questions is an incredible asset. It gives a broader array of litigants a crucial research tool, increasing access to an often expensive and exclusionary legal system, and it creates time-saving opportunities for practitioners, particularly in areas such as legal aid where resourcing is an issue. Similarly, AI can assist stenographers by generating transcripts almost instantaneously, while machine translation is a significant benefit for clients who face language and other communication barriers. However, the efficiency gains of AI, familiar from the everyday lives of students and workers, should not be conflated with reliability, and this is particularly true in the case of the law. ChatGPT’s notorious capability to answer our many questions may be convenient, but AI can “hallucinate,” generating convincing yet false answers based on fabricated material. Furthermore, AI’s tendency to perpetuate discrimination, and its inability to make moral judgements, potentially damage the legitimacy of legal outcomes and faith in the justice system.

A recent case exemplifying the problem of fabrications produced by AI hallucinations is Mata v. Avianca (2023), heard in the United States District Court for the Southern District of New York, in which Steven A. Schwartz, a lawyer with more than 30 years of experience, used ChatGPT to assist in his preparations and unintentionally submitted to the court opinions, quotes and citations that did not exist. Unaware of AI’s tendency to hallucinate and of the possibility that the information it produced “could be fabricated,” Schwartz “continued to stand by the fake opinions after judicial orders called their existence into question.” The case was consequently dismissed, wasting the time of all parties to the proceedings, and Schwartz and his law firm were publicly censured and fined US$5,000.

Unfortunately, such misuses and misunderstandings of AI in the law are far from unique. In a parallel case, Donald Trump’s former lawyer Michael Cohen was found to have used an AI system called ‘Google Bard’, which “hallucinated” fake legal citations that passed into official court files and undermined legal certainty. Although AI offers legal practitioners real efficiency in reading and processing large amounts of information and in providing research assistance, we must not confuse artificial intelligence with human intelligence, and we must remain alert to its tendency to answer queries confidently even when the facts are fabricated. All legal practitioners must therefore be alert to the potential harms of AI, which, even when inadvertent, may compromise the legitimacy of cases and promote “cynicism about the legal profession and the American judicial system.”

However, beyond the issue of empirical reliability, the use of AI poses a number of more philosophical and ethical problems for the legal profession. Law is established by the state to regulate how society behaves; it is written on moral principles that, in most cases, ensure the proper functioning of society. Trial outcomes therefore rest not only on logic but on the ethics underlying human judgement. As such, two major ethical issues can be identified with the use of AI in the legal system.

First, there is a tendency towards bias and inequitable decisions, because AI is trained on historical data. The problem is that “predictive analytics can be discriminatory.” If an AI system learns from historical records of reoffenders, its statistical models and predictive analytics will encode unfair biases, which can perpetuate systemic racial injustice against certain communities. For example, a study by researchers from three different universities analysed 27,357 drug offences in Virginia between 2013 and 2022 in which judges used AI to assign criminal recidivism risk scores ranging from 1 to 44. There was a clear discrepancy in favour of White offenders over Black offenders: Black offenders “suffer from a lower probability of alternative punishment, a higher probability of incarceration, and a longer imprisonment time of 5.4%, 4.0%, and 18.0%, respectively.”

The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, will therefore require developers of “high-risk” artificial intelligence systems to use reasonable care to protect consumers from foreseeable risks of “algorithmic discrimination”: any difference in treatment or outcome caused by bias towards or against people of a particular ethnicity, gender, disability, age and so on. Legal practitioners must be aware of the ethical issues involved in deploying AI, as violations of the Act could call the validity of a trial into question.

The second issue is AI’s inability to make moral decisions. Although AI can accumulate data from across the internet, it has no direct emotional or moral reasoning and is therefore incapable of understanding these abstract concepts. Law should rest not on logical reasoning alone but on the intention, context and social norms that deem an action right or wrong. Consider a scenario: if a man were on trial for manslaughter after a car accident, an AI with no moral conception of killing without malice would base its decision on the bare fact that a death was caused. Without weighing the possibility of an accident, it could punish the defendant as severely as in a homicide case. Putting an individual’s life within AI’s jurisdiction is an incredibly dangerous prospect, as no true justice could be established.

The proliferation of AI in our everyday lives will undoubtedly revolutionise the legal profession, offering legal professionals and the general public an important and useful tool. However, it remains essential that the dangers of this technology are recognised: the risks of AI ‘hallucination’, fabrication, bias and amorality are serious, and we must not confuse artificial intelligence with the humane and moral reasoning that underwrites our commitment to justice.

Bibliography

Clio. n.d. “AI and Law: What Are the Ethical Considerations?” Clio. https://www.clio.com/resources/ai-for-lawyers/ethics-ai-law/. [Accessed 4. Dec 2024].

Engstrom, Nora Freeman, and David Freeman Engstrom. 2024. “Justice for All? Why We Have an Access to Justice Gap in America—and What Can We Do about It?” Stanford Law School. June 13, 2024. https://law.stanford.edu/2024/06/13/justice-for-all-why-we-have-an-access-to-justice-gap-in-america-and-what-can-we-do-about-it/. [Accessed 1 Dec. 2024].

Ho, Yi-Jen (Ian), Wael Jabr, and Yifan Zhang. 2023. “AI Enforcement: Examining the Impact of AI on Judicial Fairness and Public Safety.” Social Science Research Network, Rochester, NY. August 6, 2023. https://doi.org/10.2139/ssrn.4533047. [Accessed 28 Nov. 2024].

Legg, Michael, and Vicki McNamara. 2024. “AI Is Creating Fake Legal Cases and Making Its Way into Real Courtrooms, with Disastrous Results.” UNSW Sites. March 13, 2024. https://www.unsw.edu.au/newsroom/news/2024/03/AI-creating-fake-legal-cases-disastrous-results. [Accessed 30 Nov. 2024].

Levi, Stuart D., Ken D. Kumayama, William E. Ridgway, Mana Ghaemmaghami, and MacKinzie M. Neal. 2024. “Colorado’s Landmark AI Act: What Companies Need to Know.” Skadden, Arps, Slate, Meagher & Flom LLP. June 24, 2024. https://www.skadden.com/insights/publications/2024/06/colorados-landmark-ai-act. [Accessed 29 Nov. 2024].

“Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023).” n.d. Justia Law. https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/ [Accessed 1 Dec. 2024].

Murray, Michael. 2024. “The Rise of AI in Court Reporting Technology.” Legaltech News. January 12, 2024. https://law.com/legaltechnews/2024/01/12/the-rise-of-ai-in-court-reporting-technology/?slreturn=2024120931717. [Accessed 1 Dec. 2024].

Pazzanese, Christina. 2020. “Great Promise but Potential for Peril.” Harvard Gazette. Harvard University. October 26, 2020. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/ [Accessed 29 Nov. 2024].

Raymond, Nate. 2023. “Ex-Trump Fixer Michael Cohen Says AI Created Fake Cases in Court Filing.” Reuters, December 29, 2023. https://www.reuters.com/legal/ex-trump-fixer-michael-cohen-says-ai-created-fake-cases-court-filing-2023-12-29/. [Accessed 3 Dec. 2024].

Sonday, Kristen. 2024. “AI for Legal Aid: How to Empower Clients in Need - Thomson Reuters Institute.” Thomson Reuters Institute. October 15, 2024. https://www.thomsonreuters.com/en-us/posts/legal/ai-for-legal-aid-empowering-clients/. [Accessed 4 Dec. 2024].

Wacks, Raymond. 2008. “3. Law and Morality,” in Law: A Very Short Introduction. Oxford University Press, 67–84. https://academic.oup.com/book/818/chapter-abstract/135441901?redirectedFrom=fulltext. [Accessed 4 Dec. 2024].
