AI As An Employment Agent: What Mobley v. Workday Addresses, and What It Doesn’t
Introduction
In the late 20th century, the job hunt was revolutionized. No longer did candidates have to type or painstakingly hand-write all their qualifications onto a piece of paper, then bring it to a prospective employer; they could simply hop on the newly christened Internet, create a thread on a career board, and cast a lure out to a vast sea of potential employers with one click. Thirty years later, the job hunt is being reborn once again with automated hiring tools.
In February 2023, Derek Mobley filed a class action lawsuit in the U.S. District Court for the Northern District of California against Workday, Inc., whose AI-powered screening software is used broadly by corporations to filter applicants in the hiring process. In his complaint, Mobley claimed that Workday’s algorithm had systematically discriminated against him because he is Black, over 40 years old, and suffers from anxiety and depression. This algorithmic bias, he alleged, had resulted in over 100 rejections since 2017 from positions that used Workday’s platform. The lawsuit alleged that Workday’s AI applicant screening discriminates against candidates on the basis of race, age, and disability, in violation of Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA) of 1967, and the ADA Amendments Act (ADAAA) of 2008 [1]. Title VII prohibits discrimination based on race, color, religion, sex, and national origin, while the ADEA protects workers aged 40 and older from age-based discrimination in hiring and firing practices. The ADAAA broadened the definition of "disability" under the Americans with Disabilities Act (ADA) to ensure greater protections for individuals with substantial physical or mental impairments.
Mobley’s original complaint, which alleged claims against Workday as an “employment agency,” was dismissed by the court on the grounds that he had not plausibly alleged that Workday is an employment agency as defined by the anti-discrimination statutes. In his amended complaint, he added a new allegation: that Workday acts as an “agent” of employers and can therefore be held responsible for employment discrimination. This idea is known as “agency theory.”
On March 12, 2024, Workday filed a second motion to dismiss, this time directed at the amended complaint, which was evaluated by Judge Rita Lin. Though Judge Lin again dismissed Mobley’s claim that Workday acted as an employment agency, she allowed his theory that Workday could be held liable as an agent of employers to proceed. This outcome was supported by the Equal Employment Opportunity Commission (EEOC), which had filed an amicus brief in April 2024 urging the court to deny Workday’s motion. The court wrote in its July 2024 decision, “The FAC [First Amended Complaint] plausibly alleges that Workday's customers delegate traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday,” and that “Workday is an agent of its client-employers, and thus falls within the definition of an employer for purposes of Title VII, the ADEA, and the ADA [2].” Accordingly, Workday could be held liable for discrimination under those same provisions.
Because the agency theory argument survived the motion to dismiss, the case will now proceed to discovery, the pretrial phase in which the plaintiff and defendant gather the evidence they will use to prove or rebut the claim in court. In other words, the Mobley decision has broken new ground in labor law by legitimizing a new way in which firms could potentially be held liable for discrimination. There had previously been no judicial decision on delegating hiring functions to AI software, something that, according to the ACLU, 99% of Fortune 500 companies are already doing [3]. As firms increasingly push hiring decisions onto automated systems in the name of efficiency, this decision is a stark declaration that using AI is not necessarily a “get out of jail free” card for employers or their agents.
In this article, I examine the application of employment discrimination law to artificial intelligence systems through the lens of agency theory and recent jurisprudence. Part I provides an overview of the historical development of agency theory in employment discrimination, focusing on the cases that established third-party liability for discriminatory hiring practices: Association of Mexican-American Educators v. California and Williams v. City of Montgomery. Part II analyzes the current challenges in establishing liability for AI-driven hiring discrimination, examining both disparate impact doctrine and negligence frameworks, and explores legislative attempts, such as New York City’s Local Law 144 (LL 144), to regulate automated employment decision tools. In conclusion, I argue that while Mobley v. Workday marks a significant step in extending traditional employment discrimination protections to AI systems, it does not provide a comprehensive framework for AI liability. Any such framework would need to navigate a legal minefield of questions about when and how humans can be held responsible for an AI’s decisions, particularly given the frequent and unforeseeable errors AI makes; the Mobley ruling does not attempt to do so. Instead, Mobley’s greatest value is in reaffirming the fundamental principle that employers cannot evade discrimination liability by delegating hiring functions to third parties, whether human or automated.
I. “Agency Theory”: A History
The idea that laws against employment discrimination extend beyond direct employers stretches back to well before the use of artificial intelligence became commonplace. Throughout the 1990s, several rulings established that entities which, though not employers themselves, exercised power over employment decisions were not immune from Congress’ employment discrimination prohibitions. The most relevant decision is perhaps Association of Mexican-American Educators v. California (2000).
In 1992, 15 teachers and three teachers’ associations filed a class action lawsuit against the State of California and the California Commission on Teacher Credentialing. They alleged that the California Basic Educational Skills Test (“CBEST”), which any educator must pass to be certified to teach in California, was discriminatory, had not been properly validated, and violated Titles VI and VII of the 1964 Civil Rights Act [4]. They pointed to the fact that, since its inception, the CBEST had failed minority educators at disproportionate rates relative to their white peers: California state records showed a pass rate of 80% for whites and 35% for African-Americans. The defendants argued that the State of California could not be held liable because there was no direct employment relationship between the State and the plaintiffs: while the CBEST is a prerequisite for employment for prospective teachers, hiring decisions are made by each individual school district.
However, Judge Susan Graber disagreed. Writing for the court, she explained, “A direct employment relationship is not a prerequisite to Title VII liability [...] Among other things, we have held that an entity that is not the direct employer of a Title VII plaintiff nevertheless may be liable if it ‘interferes with an individual's employment opportunities with another employer’ [5].” Ultimately, the court found that by requiring, formulating, and administering the CBEST, which was a barrier to employment at any public school, the State of California had sufficiently "interfered" with employment decisions to fall within Title VII's scope.
Multiple parallels can be drawn between Ass’n of Mexican-American Educators and Mobley that directly support Judge Lin's reasoning in Mobley v. Workday. Just as California created a gateway requirement that effectively filtered who could and couldn't become teachers, Workday's algorithmic screening creates similar barriers to employment. While California used a standardized test as its filtering mechanism and Workday uses AI-powered assessments, both entities essentially serve as gatekeepers, making preliminary decisions about which candidates can proceed in the hiring process. In both cases, these third parties exercise significant control over employment opportunities, despite not being direct employers themselves. Moreover, both systems were challenged for potentially discriminatory impacts on protected groups: the CBEST for its disproportionate impact on minority educators, and Workday's algorithm for alleged discrimination based on race, age, and disability.
Another case illuminating how courts have historically handled third-party liability in employment decisions is Williams v. City of Montgomery (1984), in which Tate Williams, a Black firefighter, sued the City of Montgomery and the Montgomery City-County Personnel Board (“Board”) for discrimination. At the time, the Montgomery fire department’s policy was to discharge any firefighter convicted of a felony. Of the three firefighters who had been convicted of felonies, two were white, and neither of the white firefighters was discharged by the Board; Williams, however, was discharged for his felony conviction. After the court found that the Board’s reasons for firing Williams while retaining the other firefighters were “purely subjective” and violated Title VII, the Board argued that it was not an employment agency and thus could not be held liable. The court determined, however, that the Board was an agent of the City because it had been given the power to “exercise duties traditionally reserved for the employer”: formulating minimum standards for employees, evaluating employees, and hiring or firing them. As a result, the Board was liable under Title VII [6]. It is indisputable that the Workday platform, which filters out and prioritizes candidates for employment, has likewise been delegated duties traditionally reserved for employers. According to the precedent established by Williams, then, it too should be considered an employer under agency theory.
For decades, then, it has been recognized that the agents employers use for hiring decisions are no more able to quietly slip through Title VI and Title VII protections than the employers themselves. Both Ass’n of Mexican-American Educators and Williams show a judicial willingness to hold a third party liable for its discriminatory conduct when it exercises an employer’s traditional functions, a trend that the Mobley ruling suggests will continue even through technological advances.
II. The Puzzle of AI Liability
What broader implications does the Mobley v. Workday ruling have for the employers that use these technologies in their hiring practices? The answer is not clear-cut. One of the legal obligations of an employer is to ensure that its hiring practices do not have a disparate impact, which occurs when a “policy or practice fall[s] disproportionately” on a group covered by the Civil Rights Act [7]. However, it is extremely difficult for a plaintiff to compile enough evidence to prove disparate impact from an algorithmic model. It is not enough to show that the algorithm was trained on a biased data set that over- or underrepresented certain protected groups: the plaintiffs must also “prove that the employer created or has access to the model, can discover that there is discriminatory effect, and can discover the particular mechanism by which that effect operates.” Overall, plaintiffs in disparate impact cases succeed only around 25% of the time in district courts and 19% of the time in the courts of appeals [8].
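To make the statistical core of such a showing concrete, the sketch below, written in Python, performs the kind of selection-rate comparison on which a disparate impact claim typically rests. The applicant counts are hypothetical, chosen to echo the CBEST pass rates discussed in Part I, and the four-fifths threshold used to flag the disparity comes from the EEOC’s general guideline (29 C.F.R. § 1607.4(D)), not from any standard applied in Mobley itself.

```python
# Illustrative only: a selection-rate comparison of the sort a disparate
# impact showing usually begins with. The applicant counts are hypothetical,
# chosen to reproduce the CBEST-style pass rates of 80% and 35%.
# The 0.8 cutoff is the EEOC's "four-fifths" guideline, not a rule from Mobley.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who pass the screen."""
    return selected / applicants

def impact_ratio(group_rate: float, highest_rate: float) -> float:
    """A group's selection rate relative to the most-favored group's rate."""
    return group_rate / highest_rate

rates = {
    "white": selection_rate(800, 1000),             # 0.80
    "african_american": selection_rate(350, 1000),  # 0.35
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = impact_ratio(rate, highest)
    flag = "below four-fifths guideline" if ratio < 0.8 else "within guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Even a disparity this stark only begins the analysis, of course: as noted above, a plaintiff must still tie the observed effect to the particular mechanism inside the model, which is precisely the evidence applicants rarely have.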
In Mobley v. Workday, Judge Lin denied Workday’s motion to dismiss Mobley’s disparate impact claims, allowing them to proceed. This does not mean that Mobley successfully proved disparate impact; it means only that the judge found the claim plausible enough to move forward, a significantly lower legal threshold. Even then, Mobley was only able to plead a disparate impact because he had applied to over 100 jobs through the Workday platform in a variety of fields, which Judge Lin writes is analogous “to having over one hundred qualified applicants like Mobley all strike out for jobs with one employer [9].” It is easy to imagine a scenario in which a plaintiff does not even have enough evidence to plead disparate impact, much less prove it.
Even if plaintiffs manage to establish disparate impact, employers can invoke the business necessity defense, which permits an otherwise discriminatory hiring method when it is necessary to the essential operations of the business. Griggs v. Duke Power Co. (1971) frames the requirement as follows: the discriminatory practice must “be related to job performance,” have a “relationship to the employment in question,” be “demonstrably a reasonable measure of job performance,” have a “relationship to job-performance ability,” or “must measure the person for the job and not the person in the abstract [10].” Under this standard, if a data model discriminates based on a legitimately job-related trait such as previous job tenure, it would likely be permitted under the business necessity defense. In that case, the plaintiff would have only one more recourse: to claim that the employer could have used an “alternative employment practice” with less discriminatory results (for example, removing or lessening the bias in the model). This, too, would be difficult to argue, because removing bias is a tall order. It might require collecting additional data to feed the model, or implementing algorithmic safeguards, both routes the employer could claim are prohibitively costly. In that case, the court might rule that the employer is not “refusing” to use an alternative employment practice, but is simply unable to.
Rather than relying on the doctrine of disparate impact, which is frustratingly difficult to prove, some scholars have begun to interpret Title VII discrimination under the doctrine of negligence. Liability under the doctrine of negligence does not require proving an intent to discriminate, nor does it impose the kind of strict liability that disparate impact does.
An individual is liable for negligent behavior when they breach their duty of care; accordingly, one could argue that an employer who makes no effort to ensure its hiring practices are not discriminatory is being negligent. As Professor Andrés Páez writes in Negligent Algorithmic Discrimination, “Negligence is a breach of our duty to protect others ... An employer’s failure to exercise due care in the manner of choosing employees, or maintaining or terminating their employment, is a breach of that duty [11].” But negligence law is not a perfect legal solution either. Because of AI’s black-box nature and the inscrutable thickets of code that separate its inputs from its outputs, it is difficult to maintain that an employer should have, or even could have, reasonably foreseen that an algorithm would cause harm. Negligence law fundamentally “no longer works” within the paradigm of AI liability, argues Professor Andrew Selbst of UCLA Law in Negligence and AI’s Human Users [12].
Some legislation has taken aim at these issues by requiring external oversight. New York City’s algorithmic bias audit law, Local Law 144 (LL 144), obligates employers to submit their Automated Employment Decision Tools (AEDTs) to annual independent bias audits, make the results publicly available, and give notice of their use of AEDTs to employees and job candidates residing in New York City [13]. These notices can be made individually through mail or email, or a general notice can be posted on the employment page of the firm’s website. LL 144 has been alternately derided as too heavy-handed by businesses and as “underinclusive” by groups such as the Surveillance Technology Oversight Project [14]. Its wording, some argue, is overly lenient: its requirements are triggered only where 1) the AEDT’s output is the only factor considered, 2) its output is the most heavily weighted factor considered, or 3) its output can override decisions made from other factors, such as human opinion [15]. This ambiguous phrasing, including the lack of any definition of what it means to override other factors, combined with the inherent difficulty of proving a violation, leaves the vast majority of employers who use these tools effectively scot-free. That might explain why a Cornell study found that only 4% of NYC-based job listings actually posted the required notices [16]. Legislators’ first stab at balancing businesses’ ability to operate against the elimination of discrimination clearly does not fully address the deep-rooted problem of enforcing liability doctrine in circumstances so technologically removed from those in which liability law was originally conceived.
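As a rough illustration of how narrow those triggering conditions are, the sketch below encodes the three prongs exactly as summarized above from the rule cited at [15]; it is a simplified reading rather than the rule’s text, and the class and field names are hypothetical.

```python
# A minimal sketch of LL 144's triggering conditions as summarized above
# (a simplified reading of 6 RCNY 5-300, not the rule's actual text).
# The dataclass and its field names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class AEDTUsage:
    only_factor: bool                 # the AEDT's output is the only factor considered
    most_weighted_factor: bool        # its output outweighs every other factor
    can_override_other_factors: bool  # its output can override, e.g., human judgment

def ll144_requirements_triggered(usage: AEDTUsage) -> bool:
    """On this simplified reading, return True if the bias audit and notice
    obligations would attach to the employer's use of the tool."""
    return (
        usage.only_factor
        or usage.most_weighted_factor
        or usage.can_override_other_factors
    )

# Example: a tool whose score is merely one input among several that a
# recruiter weighs would, on this reading, fall outside all three prongs.
advisory_tool = AEDTUsage(only_factor=False,
                          most_weighted_factor=False,
                          can_override_other_factors=False)
print(ll144_requirements_triggered(advisory_tool))  # False
```

On this reading, an employer that keeps even nominal human review in the loop and treats the tool’s score as one factor among several could plausibly claim to fall outside the law entirely, which is consistent with the compliance gap the Cornell study documents.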
Ultimately, Mobley v. Workday represents an important but limited advancement in addressing the AI liability puzzle. Though Judge Lin allowed Mobley’s disparate impact claims to proceed, the ruling was arguably a procedural victory, permitting the case to go forward rather than establishing substantive standards for proving discrimination. As noted, Mobley’s unusual circumstance of having applied to over 100 jobs through Workday created an evidentiary advantage most plaintiffs would not possess. Even as Mobley expands who can be held liable, it offers little guidance on how to overcome the challenges of proving algorithmic discrimination. Neither traditional disparate impact doctrine, which requires extensive statistical evidence often inaccessible to plaintiffs, nor emerging negligence frameworks, which struggle with AI’s inherent opacity, find resolution in the Mobley ruling. Meanwhile, legislative efforts like New York City’s LL 144 reveal the practical difficulties of enforcing accountability. This patchwork of spotty regulation and judicial gray areas is why Mobley represents only a first step toward addressing algorithmic discrimination.
Conclusion
Mobley v. Workday marks a notable yet narrow advancement in the application of employment discrimination law to AI systems. By applying agency theory to AI software, Mobley prevents employers from evading discrimination law by outsourcing their hiring decisions to platforms like Workday, and it establishes that AI employment agents can, at least in principle, be held legally liable for discrimination. However, it does not provide guidance on the equally crucial question of whether that liability would be established through a disparate impact framework, a negligence framework, or something else entirely. With that in mind, Mobley’s greatest contribution to labor law may be more subtle. The court’s order emphasizes a critical point: "Employers cannot escape liability for discrimination by delegating their traditional functions, like hiring, to a third party." Whether that third party is human, as in Ass’n of Mexican-American Educators and Williams, or automated is irrelevant. We can only hope that the precedent set by Mobley, a continuation of the promise of equal opportunity enshrined in law by the 1964 Civil Rights Act and a striking reaffirmation of justice, will remain at the heart of future rulings.
Acknowledgements
I am incredibly grateful to Amy Sepinwall, J.D., Ph.D., Associate Professor of Legal Studies and Business Ethics at the Wharton School, for her continued guidance and mentorship throughout the process of writing this article. This submission would not have been possible without her feedback, perspective, and encouragement.
Bibliography
[1] Mobley v. Workday, Inc., 4:23-cv-00770-YGR (N.D. Cal. Jul. 25, 2023)
[2] Ibid.
[3] “Accountability in Artificial Intelligence.” American Civil Liberties Union, September 18, 2024. https://www.aclu.org/issues/racial-justice/accountability-in-artificial-intelligence.
[4] Association of Mexican-American Educators v. California, 183 F.3d 1055 (9th Cir. 1999)
[5] Ibid.
[6] Williams v. City of Montgomery, 742 F.2d 586 (11th Cir. 1984)
[7] “Section VII- Proving Discrimination- Disparate Impact.” Civil Rights Division, United States Department of Justice, October 30, 2023. https://www.justice.gov/crt/fcs/T6Manual7; see also “12.11 ADA-Defenses-Business Necessity.” Ninth Circuit Manual of Model Jury Instructions. Accessed October 5, 2024. https://www.ce9.uscourts.gov/jury-instructions/node/220.
[8] Barocas, Solon and Selbst, Andrew D., Big Data's Disparate Impact (2016). 104 California Law Review 671 (2016), Available at SSRN: https://ssrn.com/abstract=2477899 or http://dx.doi.org/10.2139/ssrn.2477899
[9] Mobley v. Workday, Inc., 4:23-cv-00770-YGR (N.D. Cal. Jul. 25, 2023)
[10] Griggs v. Duke Power Co., 401 U.S. 424 (1971)
[11] Andrés Páez, Negligent Algorithmic Discrimination, 84 Law and Contemporary Problems 19-33 (2021) Available at: https://scholarship.law.duke.edu/lcp/vol84/iss3/3
[12] Selbst, Andrew D., Negligence and AI's Human Users (March 11, 2019). 100 Boston University Law Review 1315 (2020), UCLA School of Law, Public Law Research Paper No. 20-01, Available at SSRN: https://ssrn.com/abstract=3350508
[13] 2021 N.Y.C. Local Law No. 144, N.Y.C. Admin. Code § 20-870.
[14] Communications Staff, “Letter to Council Member Cumbo on Amending Intro 1894-2020,” S.T.O.P. - The Surveillance Technology Oversight Project, April 26, 2021, https://www.stopspying.org/sign-on-letters/2021/4/22/letter-to-council-member-cumbo-on-intro-1984-2020-the-need-for-a-moratorium.
[15] Rules of City of New York Department of Consumer and Worker Protection (6 RCNY) § 5-300
[16] Wright, Lucas, Roxana Mika Muenster, Briana Vecchione, Tianyao Qu, Senhuang (Pika) Cai, Alan Smith, Jake Metcalf, and J. Nathan Matias. 2024. “Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability.” OSF. https://doi.org/10.17605/OSF.IO/UPFDK.