A class-action lawsuit targeting HR and finance platform Workday Inc. may end up including millions of potential victims, and it offers a cautionary tale, along with some important lessons, for companies that use artificial-intelligence tools in their hiring decisions.
The lawsuit, brought by Derek Mobley in the Northern District of California, claims that Workday’s AI-based applicant-recommendation system scored job applicants and then discriminated against them on the basis of race, age and disability.
Mobley, who is over 40, along with other named plaintiffs, said they applied for hundreds of jobs with companies using Workday tools, only to get silence in return. The lawsuit alleges that Workday’s tools created scores for each applicant and flagged for customers which candidates they should hire, contending that Workday acted as an “agent” of its customers.
Workday, in court documents, has denied the allegations and asked that the lawsuit be dismissed. The company has disputed that it offers “employment recommendations” to customers using its software at all, arguing that it therefore is not at fault for any potential discrimination. Workday also said in court documents that “1.1 billion applications were rejected using Workday” during the time period at issue and that “any notice would still invite potentially hundreds of millions of potential plaintiffs to file their claims in this case.”
Connor Spielmaker, principal of corporate communications for Workday, said in a statement that the lawsuit was without merit.
“Workday’s AI recruiting tools do not make hiring decisions or automatically reject candidates — hiring decisions are always made by our customers, who maintain full control and human oversight,” Spielmaker said. “These tools look only at the qualifications listed in a candidate’s job application and compare them with the qualifications the employer has identified as needed for the job.”
He said the programs are not trained to use, or even identify, protected characteristics like age, race or disability, that the court has already dismissed all claims of intentional discrimination, and that there is no evidence the technology results in harm to protected groups.
In May, a judge granted the lawsuit class-action status. The order stressed that Workday used the term “recommendation” both on its website and in documents provided early in the discovery process.
Workday has questioned whether there is a true “class” for the lawsuit, given how many factors are involved in job applications, and how such a class could be measured. The judge said those issues could be worked out in the course of the litigation.
Companies adjust hiring practices
The lawsuit creates a fundamental compliance challenge for businesses, similar to the “employment testing” lawsuits of the 1970s through the 1990s. The Supreme Court ruled in Griggs v. Duke Power Co. in 1971, and again in Albemarle Paper Co. v. Moody in 1975, that employment practices, even if neutral on their face, could be unlawful if they disproportionately harm specific groups. Employers found they could not simply rely on professionally developed tests without validating them for job-relatedness and disparate impact, and companies today remain fully liable for discrimination when using AI hiring tools.
“But most AI tools operate as ‘black boxes’ that make required bias testing nearly impossible to conduct,” said Brian Patterson, partner in the Houston office of law firm Bracewell LLP, in an email. “Employers will need to treat AI hiring tools like any other selection criteria and obtain confidence from AI vendors that these tools do not have disparate impact on protected groups.”
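The bias testing Patterson describes has a well-established starting point: the “four-fifths rule” from the EEOC’s Uniform Guidelines, under which a selection rate for any group that falls below 80% of the highest group’s rate is treated as initial evidence of adverse impact. Below is a minimal sketch of that check; the group labels and applicant counts are hypothetical illustration data, not figures from the Workday case.

```python
# A minimal sketch of the EEOC "four-fifths rule" screen for adverse impact.
# Group labels and counts are hypothetical, not figures from the case.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were screened in."""
    return selected / applicants

# (screened-in, total applicants) per group -- hypothetical numbers
groups = {
    "group_a": (120, 400),  # 30% selection rate
    "group_b": (45, 300),   # 15% selection rate
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, ratio vs. highest={impact_ratio:.2f} ({flag})")
```

The four-fifths ratio is only a first-pass screen, not a legal conclusion; statistically significant disparities can still matter even when the ratio clears 0.8.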
AI tools have grown tremendously in the few years since OpenAI’s ChatGPT took the world by storm in 2022. Many workers use chatbots for career and job advice, while many managers use ChatGPT and other tools to determine which workers should get raises and promotions. Experts, however, have previously warned about potential litigation over the use of AI tools, including “hallucinations,” in which a generative-AI tool gives confident but incorrect answers.
The Workday case could slow AI adoption among employers for certain uses until they become more comfortable that AI vendors have properly validated their systems, much as companies once pivoted away from certain cognitive and personality tests.
AI can still be used for administrative purposes in hiring, such as parsing resume data into standardized formats, organizing material or scheduling interviews, because those tasks do not involve decisions about individual candidates. But the moment AI is used to screen, test, rank or make any kind of recommendation, it becomes a selection tool that must be validated, Patterson said.
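As an illustration of the administrative side of that line, a parser that normalizes free-text resume fields into a standard record makes no judgment about a candidate. A minimal sketch, with hypothetical patterns and field names:

```python
# A minimal sketch of "administrative" AI use: normalizing free-text resume
# contact fields into a standard record. Patterns and field names are
# hypothetical; nothing here scores or ranks the candidate.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\(?\+?\d[\d ().-]{7,}\d")

def parse_contact_fields(resume_text: str) -> dict:
    """Extract email and phone into a standardized record."""
    email = EMAIL.search(resume_text)
    phone = PHONE.search(resume_text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

sample = "Jane Doe\njane.doe@example.com\n(555) 123-4567\n10 years experience"
print(parse_contact_fields(sample))
# {'email': 'jane.doe@example.com', 'phone': '(555) 123-4567'}
```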
How AI is put to use
Sheldon Arora, CEO of StaffDNA, said the lawsuit shows that AI is only part of the solution in hiring, not the entire answer, and underscores the need for human supervision at every step along the way.
What AI does well, Arora said, is match the right candidates to the right jobs. But if filters are applied based on what a company may prefer, the match will not be optimal and could exclude candidates who do not fit narrow criteria.
“This is where human decision-making and judgment comes into play. There’s a need to look beyond the data and see the nuances of a job candidate’s background and skills. Since humans program AI, it’s critical to have continuous oversight and minimize bias,” Arora said. “Employers should not be implementing AI in the hiring process and letting it go completely unchecked.”
David Wimmer, attorney and partner at law firm Swerdlow Florence Sanchez Swerdlow & Wimmer, said there is no proof to date that Workday’s AI systems discriminated. The case likely will lead to a “battle of the experts” as each side tries to convince a jury of its argument.
The certification of class-action status and a judge’s order requiring Workday to produce the names of its employer clients mark a “huge inflection point” for the company, Wimmer said. Now Workday has to decide how far to take the case.
“Do they settle this case now to avoid having to go down that path of having to reveal their clients? Of having to send out notices to how many hundreds of millions of people? Do they say now is the time to cut our losses?” Wimmer said.
It’s not just Workday that faces potential liability. It’s all of the companies that used Workday’s systems, Wimmer said. If a jury finds there was disparate impact, that could expose those customers too.
What companies can and should do is make sure any agreement with a software vendor includes defense and indemnification provisions, along with assurances from the vendor that the systems have been tested for disparate impact. Companies also need to do their own due diligence, Wimmer said.
“Companies should be testing it,” he said. “Are they running through and checking, manually, some of the folks who have been passed on through, or some of the folks that have been passed on? Are they double checking to see if this makes sense?”
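One lightweight way to operationalize the spot checks Wimmer describes is to pull a random sample from both the screened-in and screened-out pools for human review. A minimal sketch follows; the field names and applicant records are hypothetical, not drawn from any real system.

```python
# A minimal sketch of a manual spot-check audit: sample applicants from both
# the screened-in and screened-out pools for human review. Field names and
# records are hypothetical, not from any real system.
import random

rng = random.Random(42)  # fixed seed so the audit sample is reproducible

# Fake applicant records with a fake screening outcome.
applicants = [{"id": i, "screened_in": rng.random() < 0.3} for i in range(1000)]

def sample_for_review(pool: list, k: int) -> list:
    """Draw up to k applicants from a pool for manual re-review."""
    return rng.sample(pool, min(k, len(pool)))

passed = [a for a in applicants if a["screened_in"]]
rejected = [a for a in applicants if not a["screened_in"]]

for label, pool in (("screened in", passed), ("screened out", rejected)):
    batch = sample_for_review(pool, k=25)
    print(f"{label}: manually review {len(batch)} of {len(pool)} applicants")
```

Auditing both sides of the screen matters: reviewing only the candidates who advanced will never surface qualified applicants the tool wrongly filtered out.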
The Workday case is not the only lawsuit over AI in hiring, and some states have passed new regulations expressly governing the use of AI in automated employment decisions.
A California regulation that goes into effect Oct. 1, for example, will require companies to retain records and will place limitations on how AI systems in hiring can be used.
“That’s just going to lead to a whole new explosion of litigation,” Wimmer said.
Lisa Dawson