Old Law, New Bias: Applying Civil Rights Doctrine to Algorithmic Discrimination
As technology stretches into ever more areas of daily life, courts and lawmakers bear an increasingly significant responsibility to ensure that innovation does not compromise the rights of each individual. In February 2023, Derek Mobley, an African American man over the age of 40 with depression and anxiety, filed a lawsuit against Workday, Inc., a human resources and financial management company whose platform screens job applicants with artificial intelligence, after being rejected from more than 100 jobs over several years [1]. Mobley claimed that Workday's applicant screening algorithm unlawfully rejected candidates based on characteristics such as race, age, and disability. More than two years later, a California federal judge allowed the case to continue and permitted Mobley to expand his age discrimination claim into a nationwide action, potentially encompassing millions of other applicants over the age of 40. This case, Mobley v. Workday (2025), forces courts to decide how existing anti-discrimination laws apply to artificial intelligence (AI). Algorithms promise objectivity but often reproduce the same inequalities they are supposed to eliminate. Because no comprehensive "AI law" yet exists, judges must rely on long-standing civil rights principles to prevent algorithmic bias from becoming a new, invisible form of systemic discrimination.
At their core, algorithms are not mysterious. As one definition explains, “an algorithm is a process or set of rules to be followed in problem solving. It is a structured process. It proceeds in logical steps. This is the essence of processes programmed into computers” [2]. In theory, such logic should guarantee neutrality, an orderly sequence of decisions immune to human error or emotion. In practice, however, algorithms reflect the same biases found in the data and systems that created them. When trained on years of inequitable hiring or lending patterns, they tend to replicate those same outcomes [3]. In State v. Loomis (2016), the Wisconsin Supreme Court considered whether using the COMPAS risk assessment tool at sentencing violated a defendant’s due process rights. Independent studies have shown that COMPAS tends to overestimate the likelihood of reoffending for Black defendants and underestimate it for White defendants, resulting in higher average risk scores for Black individuals despite similar behavior [4]. Despite these concerns, the court upheld the use of COMPAS, reasoning that the sentencing judge did not rely solely on the algorithm and that its proprietary nature did not, by itself, violate due process. The Court wrote that “COMPAS risk scores cannot be used to determine whether an offender is incarcerated or set the severity of a sentence,” but then affirmed a sentence in which the risk score plainly influenced the judge’s decision [5]. Worse still, the opinion accepted proprietary secrecy as a justification for opacity, noting that the algorithm’s creator, Northpointe, had a “proprietary interest in keeping its COMPAS source code confidential.” In effect, the court elevated trade-secret protection over the defendant’s constitutional right to test the evidence used against him. That move embodies precisely the judicial reluctance this essay challenges: the tendency to defer to technological complexity instead of using established procedural tools, such as compelled disclosure, expert validation, or evidentiary hearings, to ensure fairness. Loomis did not merely defer to innovation; it distorted due process itself by treating awareness of bias as a substitute for fairness. In doing so, it created a blueprint for judicial inaction that undermines the very principles the Fourteenth Amendment exists to protect.
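To see how that replication happens, consider a minimal, hypothetical sketch (synthetic data only, not any vendor's actual system): a screening model that never sees a protected trait can still learn to disadvantage a protected group through a correlated proxy such as ZIP code, because the historical hiring labels it learns from already encode the bias.

```python
# A minimal, hypothetical sketch of proxy discrimination using synthetic data.
# Past hiring penalized group B; ZIP code correlates with group, so even a
# model that never sees the protected trait reproduces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                   # 0 = group A, 1 = group B (protected trait)
zip_code = (group + (rng.random(n) < 0.2)) % 2  # proxy feature, 80% aligned with group
skill = rng.normal(0, 1, n)                     # identically distributed across groups

# Historical "hired" labels: skill matters, but group B was penalized.
hired = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

# Train a facially neutral model on skill and ZIP code only (group excluded).
X = np.column_stack([skill, zip_code])
model = LogisticRegression().fit(X, hired)
advance = model.predict_proba(X)[:, 1] > 0.5

for g, label in ((0, "A"), (1, "B")):
    print(f"Group {label} advance rate: {advance[group == g].mean():.0%}")
# Despite identical skill distributions, group B advances far less often.
```

It is exactly this kind of quiet, data-driven replication of past inequity, rather than any explicit rule, that courts such as the one in Loomis have been reluctant to scrutinize.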
By contrast, in Houston Federation of Teachers v. Houston ISD (2017), a federal court held that teachers' due-process rights were at stake when the school district used a secret algorithm to evaluate job performance and pay [6]. Because teachers could not access or challenge the data behind their scores, the court allowed their constitutional claim to proceed, concluding that an unverifiable scoring system denied them a meaningful opportunity to contest the results. This case demonstrated the opposite instinct, insisting that technological opacity cannot override basic fairness. Together, Loomis and Houston reveal a judicial split: some courts defer to complexity, while others demand accountability and transparency.
The Mobley lawsuit sits squarely in the middle of that divide. Workday argued that it could not be sued because it was merely a software vendor, not an "employer" or an "employment agency." Under federal law, an employment agency is any entity that recruits or refers job seekers for employers and can be held liable for discrimination. Mobley's attorneys responded that Workday's platform performs the core functions of such an agency: it screens applicants for multiple employers and determines who advances in the hiring process. In January 2024, Judge Rita Lin dismissed some of Mobley's claims but signaled that the theory was plausible. By July, she ruled that Workday could be treated as an agent of its client companies, meaning a party acting on their behalf and therefore subject to liability under anti-discrimination laws. By spring 2025, she had conditionally certified a nationwide collective action under the Age Discrimination in Employment Act (ADEA), allowing other older job applicants to join [7]. Judge Lin held that "at this stage, the Court cannot conclude that Workday's algorithmic screening tool is outside the scope of Title VII or the ADEA" and that "if the tool effectively acts as a gatekeeper to employment opportunities, it may be treated as the employer's agent" [8]. She also rejected Workday's argument that only the hiring employers could be liable, writing that "a vendor cannot insulate itself from anti-discrimination law by automating functions that human recruiters used to perform." Judge Lin's rulings reject the deferential logic of Loomis and demonstrate that ordinary legal doctrines, like agency and disparate impact, can reach algorithmic bias when courts are willing to use them. Mobley shows that the path forward requires not new law but judicial will. Her reasoning, which treats Workday's algorithm as an "agent" subject to liability, should be seen not as bold but as the baseline for every court confronting algorithmic discrimination.
Other early employment cases show both the promise and limits of this approach. In EEOC v. iTutorGroup (E.D.N.Y. 2023), the Equal Employment Opportunity Commission sued an online tutoring company whose hiring software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older [9]. The company quickly settled, paying $365,000 and agreeing to end the practice, an easy victory precisely because the discrimination was overt and documented. By contrast, most AI bias is invisible. Applicants rarely know that a computer made the decision or that their rejection resulted from a pattern baked into the data. Without access to that information, many would-be plaintiffs cannot allege the "specific policy or practice" needed to survive early dismissal. In that sense, Mobley is exceptional not because the alleged bias was unique, but because the plaintiff could identify it and link it to a particular tool. The lesson is not that civil-rights law needs rewriting, but that judges must use the doctrines they already have: compel disclosure, require defendants to justify business necessity, and test for less-discriminatory alternatives.
Although algorithmic discrimination may seem new, the tools to address it already exist within U.S. civil rights law. Courts have long recognized that facially neutral practices, those that do not mention race but still produce unequal outcomes, can violate anti-discrimination statutes. Under Title VII of the Civil Rights Act of 1964, employment practices that have a disparate impact (a disproportionately negative effect on a protected group) are unlawful unless the employer can prove the practice is a business necessity, meaning it is job-related and essential, and there is no less discriminatory alternative [10]. In Griggs v. Duke Power Co. (1971), the Supreme Court struck down an employer's use of aptitude tests that disproportionately excluded Black applicants even though the tests were neutral on their face [11]. The same reasoning applies to modern hiring algorithms: if an AI system systematically screens out older or minority candidates, as Mobley alleges, courts can analyze it under the Griggs framework.
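To see what a Griggs-style inquiry looks like in practice, one common first step is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)): if a protected group's selection rate falls below 80 percent of the most-favored group's rate, the disparity is generally treated as evidence of adverse impact. The sketch below is illustrative only; the counts are hypothetical and are not drawn from Mobley or any real screening system.

```python
# Illustrative only: a minimal adverse-impact check in the spirit of the
# EEOC four-fifths rule. All group names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who pass the automated screen."""
    return selected / applicants

# Hypothetical outcomes from an automated resume screen.
over_40_rate = selection_rate(selected=180, applicants=1_000)   # 18%
under_40_rate = selection_rate(selected=350, applicants=1_000)  # 35%

impact_ratio = over_40_rate / under_40_rate
print(f"Selection rates: over-40 {over_40_rate:.0%}, under-40 {under_40_rate:.0%}")
print(f"Impact ratio: {impact_ratio:.2f}")

# Under the four-fifths rule, a ratio below 0.80 is generally treated
# as evidence of adverse impact on the disfavored group.
if impact_ratio < 0.8:
    print("Ratio falls below four-fifths: evidence of adverse impact.")
```

A ratio of roughly 0.51, as in this hypothetical, falls well below the 0.80 benchmark and would ordinarily shift attention to the employer's business-necessity defense.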
The same logic applies beyond employment. In housing and lending, algorithms now play a major role in deciding who qualifies for a mortgage or a line of credit. Laws like the Fair Housing Act (FHA) and the Equal Credit Opportunity Act (ECOA) already prohibit practices that disproportionately disadvantage people based on race or other protected traits, even if the rules appear neutral [12, 13]. Yet when discrimination comes from an algorithm rather than a loan officer, courts and regulators have been slow to respond. Investigations by journalists and researchers have shown that automated mortgage systems often approve White applicants more frequently than equally qualified Black or Latino borrowers [14]. These outcomes mirror the very patterns the FHA and ECOA were designed to prevent. In 2022, the Department of Justice sued Meta (Facebook) for using algorithms that allowed housing advertisers to exclude users by race and ZIP code; Meta settled and agreed to rebuild its ad-delivery system [15]. In 2024, the tenant-screening company SafeRent settled a class action for $2.2 million after its algorithm penalized renters who used housing vouchers [16]. Both cases ended in settlements rather than judicial rulings, leaving little precedent for future courts. And in 2019, New York regulators investigated the Apple Card after reports that women were receiving lower credit limits than men [17]. The investigation ultimately found no unlawful discrimination, but it underscored how difficult it is for consumers, and even regulators, to interrogate an opaque, proprietary credit model. These cases illustrate both progress and the recurring obstacle: plaintiffs and regulators often lack the data needed to prove bias, even when outcomes appear skewed.
Despite these examples, litigation remains sparse. Most people affected by algorithmic bias never know it. Job applicants receive a rejection email, not a data trail; tenants see “application denied,” not a score. And judges, when cases do arise, often face motions to dismiss before any discovery can occur. Discovery, the stage where each side obtains evidence from the other, is critical to uncovering how an algorithm works, but courts are often reluctant to compel it, citing trade-secret protections or perceived technical complexity. The rarity of these cases is itself evidence of judicial failure. When courts accept trade-secret claims uncritically, they create a procedural dead end where discrimination can hide. The law cannot function if judges refuse to look behind the algorithm.
Moving forward does not require new legislation but renewed judicial confidence. Courts should analyze algorithmic systems using the same process they apply to any employment, housing, or lending policy. If plaintiffs can show that an algorithm produces statistically significant disparities, that showing should trigger a disparate-impact inquiry. Once it is made, the burden shifts to the defendant to prove a legitimate business necessity and to show that no fairer alternative exists. Courts can compel limited disclosure of algorithms under protective orders to balance fairness against trade secrecy. In criminal cases, due-process principles should guarantee defendants meaningful access to any algorithm used in sentencing or parole decisions. The outcome in iTutorGroup and the ongoing progress of Mobley suggest that these tools work when used; what is missing is the judicial willingness to deploy them consistently.
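Courts and experts generally pair a practical-significance benchmark like the four-fifths rule with a test of statistical significance; disparities of more than two or three standard deviations have long been treated as probative in discrimination cases. A minimal two-proportion z-test, again using hypothetical counts, might look like the sketch below.

```python
# Illustrative only: a standard two-proportion z-test of the kind an expert
# witness might use to show that a screening disparity is statistically
# significant. The counts are hypothetical (the same ones used in the
# four-fifths sketch above); real litigation would rely on discovery data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """Return (z statistic, two-sided p-value) for a difference in selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(sel_a=180, n_a=1_000, sel_b=350, n_b=1_000)
print(f"z = {z:.1f}, two-sided p = {p:.1e}")
# A gap of this size sits roughly eight to nine standard deviations from what
# chance would produce, far past the two-to-three threshold courts look for.
```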
Opponents argue that extending civil rights statutes to AI systems exceeds judicial authority. They contend that Congress wrote laws like Title VII and the FHA to regulate human decision-making, not software, and that applying them to algorithms constitutes judicial legislation (judges effectively rewriting the law). Some courts have echoed this concern. In Loomis, the Wisconsin Supreme Court refused to require transparency for the COMPAS algorithm, reasoning that such mandates must come from legislators, not judges. Others warn that forcing companies to reveal proprietary algorithms could violate trade-secret protections or due-process rights that require fair notice of prohibited conduct. In their view, judicial restraint protects the separation of powers between courts and Congress.
Yet this interpretation ignores the flexibility Congress deliberately built into civil rights law. Statutes like Title VII, the ADEA, the FHA, and the ECOA prohibit discriminatory practices, not specific tools or technologies. Courts have long applied these laws to neutral mechanisms, from written exams to credit-scoring models, without waiting for new legislation. Treating algorithms differently would carve out an unjustified exemption for technology, shielding discrimination simply because it is automated. As Mobley shows, applying doctrines of agency and disparate impact to AI is not judicial activism; it is the faithful application of established law to modern facts.
We still have time to act, but the window is closing. Cases like Loomis and the Apple Card investigation show how deference to black-box technology, by judges and regulators alike, can entrench inequality. Yet Mobley, Meta, SafeRent, and iTutorGroup demonstrate that the existing legal system can respond effectively when judges and regulators demand transparency and justification. Each generation of judges has faced a moment when new technology threatened to outpace old principles. Algorithmic bias presents the same kind of challenge: it hides inequity behind data and complexity rather than words or intent. Courts can let that inequity calcify, or they can ensure that technological progress continues on the foundation of fairness and equality that civil rights law has always demanded.
Works Cited
[1] Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. July 18, 2024). https://clearinghouse.net/case/44074/
[2] Sales, Philip. "Algorithms, Artificial Intelligence, and the Law." Judicature, April 7, 2021. https://judicature.duke.edu/articles/algorithms-artificial-intelligence-and-the-law/.
[3] IBM. "What Is Algorithmic Bias?" September 20, 2024. https://www.ibm.com/think/topics/algorithmic-bias.
[4] State v. Loomis, 881 N.W.2d 749 (Wis. 2016).
[5] 881 N.W.2d 749, 764, 769 (Wis. 2016) (Bradley, J.).
[6] Hous. Fed'n of Teachers v. Hous. Indep. Sch. Dist., 251 F. Supp. 3d 1168 (S.D. Tex. 2017).
[7] Age Discrimination in Employment Act (ADEA), 29 U.S.C. § 621 et seq.
[8] Mobley v. Workday, Inc., No. 3:23-cv-00770 (N.D. Cal. July 12, 2024) (Order Granting in Part and Denying in Part Motion to Dismiss) (“Mobley Order”), slip op. at 14.
[9] Equal Employment Opportunity Commission v. iTutorGroup, Inc., et al., No. 1:22-cv-02565 (E.D.N.Y.).
[10] Civil Rights Act of 1964, 42 U.S.C. § 2000e et seq.
[11] Griggs v. Duke Power Co., 401 U.S. 424 (1971).
[12] Fair Housing Act of 1968, 42 U.S.C. § 3601 et seq.
[13] Equal Credit Opportunity Act of 1974, 15 U.S.C. § 1691 et seq.
[14] Martinez, Emmanuel, and Lauren Kirchner. "The Secret Bias Hidden in Mortgage-Approval Algorithms." The Markup, August 25, 2021. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms.
[15] U.S. Department of Justice, Office of Public Affairs. "Justice Department and Meta Platforms Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements." January 9, 2023. https://www.justice.gov/archives/opa/pr/justice-department-and-meta-platforms-inc-reach-key-agreement-they-implement-groundbreaking.
[16] Greater Boston Legal Services. "Preliminary Approval of Settlement Granted in Ground-Breaking Discrimination Class Action Brought by GBLS, Cohen Milstein, and NCLC Against SafeRent Solutions on Behalf of Rental Applicants." Accessed October 31, 2025. https://www.gbls.org/news/preliminary-approval-settlement-granted-ground-breaking-discrimination-class-action-brought.
[17] Nedlund, Evelina. "Apple Card Is Accused of Gender Bias. Here's How That Can Happen." CNN Business, November 12, 2019. https://www.cnn.com/2019/11/12/business/apple-card-gender-bias.