AI & HR: Algorithmic Discrimination in the Workplace

21 Nov 2024

[Illustration of a robot using personal computing devices]

The Emergence of AI in HR Practices

Artificial intelligence (AI) is quickly reshaping the way human resources (HR) departments make decisions in the workplace. In particular, AI is redefining key HR practices, including recruitment, selection, onboarding, performance management, and training and development. On the surface, the use of AI in HR offers a myriad of benefits: increased efficiency through streamlined HR processes, better decision-making through precise prediction and analysis, and improved employee productivity through AI personalization. Importantly, the integration of AI technologies in HR, such as machine learning and natural language processing, also aims to mitigate bias. However, plaintiffs, scholars, and others argue the opposite: AI may actually perpetuate and amplify biases in HR practices.

According to the Society for Human Resource Management, around 1 in 4 employers use AI in their HR practices, and among organizations using AI for HR purposes, talent acquisition is the leading area of use, at 64%. Further, according to Gartner, 76% of HR leaders believe their organization will lag behind in organizational success if it fails to implement AI within the next one to two years. Thus, the adoption of AI in HR is already significant, and its future prevalence is almost undeniable. What does this new reality mean for employees, employers, and the courts?

AI's Potential for Algorithmic Bias and Discrimination

Between 2014 and 2018, Amazon developed a resume-scanning tool that utilized AI for recruitment. Amazon trained this tool on previously recruited candidates' credentials to better identify and rank qualified applicants. However, Amazon engineers discovered that the tool systematically downgraded resumes submitted by female candidates. Although the gender of these applicants was never explicitly provided, the AI tool used "indirect markers, such as 'captain of the women's chess club' as proxies" to identify which applicants were female and effectively screen them out.

Research shows that AI suffers from algorithmic bias by reproducing and amplifying human biases. Amazon's AI recruitment tool discriminated against applicants on the basis of gender because of the data it was trained on. The resume-scanning tool defined the "ideal employee" based on historically biased data, which consisted of resumes predominantly submitted by men. Thus, the training data organizations use in their AI-powered HR tools risks reflecting historical or present biases instead of focusing solely on an applicant's skills and qualifications. In other words, if the data input is biased, the output will likely be biased. Given that human bias has historically plagued recruitment and selection processes, policymakers must understand the risk of algorithmic bias to ensure a fair and equitable workplace.
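To make the proxy problem concrete, the following sketch trains a toy classifier on synthetic "historical hiring" data in which past decisions penalized women. Gender is never given to the model, but a hypothetical proxy feature (membership in a women's organization, mirroring the "women's chess club" marker above) correlates with it, and the model learns to downgrade that proxy. All data here is fabricated for illustration; this is a minimal sketch of the mechanism, not a reconstruction of Amazon's system.

```python
# Minimal sketch (synthetic data): how a model trained on biased hiring
# history can learn to penalize a proxy for a protected trait.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hidden protected trait and the two features the model may see: a
# skill score and a proxy flag (e.g., "member of a women's chess club").
# Gender itself is deliberately excluded from the inputs.
is_female = rng.random(n) < 0.5
skill = rng.normal(0.0, 1.0, n)
proxy = (is_female & (rng.random(n) < 0.8)).astype(float)

# Biased historical labels: past recruiters rewarded skill but also
# systematically disfavored women, so the labels encode that bias.
hired = (skill - 1.0 * is_female + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
print("learned weights [skill, proxy]:", model.coef_[0])
```

In this toy setup the proxy's learned weight comes out sharply negative: the model has effectively reconstructed gender from a resume line item it was never told encodes gender, which is exactly the failure mode the Amazon example illustrates.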
Algorithmic Discrimination in the Context of Law and Policy

The issue of algorithmic discrimination is already appearing in courts across the United States. For example, in Saas v. Major, Lindsey & Africa, LLC, a plaintiff alleged that Major, Lindsey & Africa, a recruiting firm, used "algorithmic, machine learning, and other technical tools in the conduct of their business, and their use of such tools caused [her] to be unlawfully discriminated against on the basis" of her sex and age. The plaintiff asserted claims of "failure to refer and 'algorithmic bias' in violation of [Title VII] and the [ADEA]; retaliation in violation of Title VII and the ADEA; and fraudulent inducement in violation of Maryland law." However, the U.S. District Court for the District of Maryland dismissed the "algorithmic bias" claim because the plaintiff's allegation that the recruiting firm used AI was too speculative.

Further, in Mobley v. Workday, Inc., a plaintiff alleged that Workday, a human resource management service, used algorithmic decision-making tools "to screen applicants in [the] hiring process [that] discriminated against him and similarly situated job applicants on the basis of race, age, and disability." The plaintiff's applications allegedly listed his degree from a historically Black college and, in some applications, included personality tests that could indicate his mental health disorders. The plaintiff asserted that Workday's AI tools relied on biased training data, prompting him to bring "disparate impact and disparate treatment claims under Title VII, the [ADEA] and the [ADA]." Although the U.S. District Court for the Northern District of California dismissed the disparate treatment claim, it allowed the disparate impact claim to proceed because the plaintiff's complaint "support[ed] a plausible inference that Workday's screening algorithms were automatically rejecting Mobley's applications based on a factor other than his qualifications, such as a protected trait."

The Mobley case teaches us that organizations cannot escape liability by delegating HR decisions to AI systems. Notably, the Mobley court asserted that "[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era." Accordingly, Mobley also teaches us that both developers and users of AI tools may be held liable for discrimination under existing law. However, the Saas case shows that it may be difficult for plaintiffs to succeed on claims of algorithmic discrimination in the workplace. Thus, ambiguity persists among employees, employers, and the courts on issues like those raised in Saas and Mobley.

This ambiguity exists in large part because the United States does not have comprehensive legislation regulating the use of AI. Instead, the federal landscape consists of executive actions such as Executive Order 14110, issued by President Biden in 2023, and the White House's Blueprint for an AI Bill of Rights. Both acknowledge the impact of algorithmic discrimination and offer hope for a fairer and more equitable workplace amid the emergence of AI. For example, Executive Order 14110 states that "[i]t is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse," and the Blueprint for an AI Bill of Rights devotes an entire section to "Algorithmic Discrimination Protections." That section states that individuals should not face algorithmic discrimination and that users and developers of automated systems should design and use those systems in an equitable way. While these statements are promising, vulnerable populations will continue to face algorithmic discrimination in the workplace without federal legislation on the matter.

In the absence of federal legislation, some states and localities, including Illinois, New York City, Colorado, and California, are attempting to regulate how employers use AI in HR decisions to prevent discrimination. Additionally, federal agencies, including the EEOC and the DOL, have issued initiatives, guidance, and other materials to clarify that the use and design of AI-powered HR tools may result in discrimination in violation of the law. Several of these efforts center on quantitative bias testing, sketched below.
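For a sense of what that testing involves, EEOC guidance on AI-driven selection procedures draws on the long-standing "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures, and New York City's Local Law 144 requires bias audits built around similar impact ratios: if one group's selection rate falls below 80% of the most-selected group's rate, the tool may be having an adverse impact. The sketch below, using hypothetical applicant counts, shows the arithmetic such an audit performs.

```python
# Minimal sketch of a four-fifths-rule adverse impact check; the
# applicant and selection counts here are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

groups = {
    "men":   selection_rate(selected=120, applicants=400),  # 0.30
    "women": selection_rate(selected=60, applicants=300),   # 0.20
}

# Compare each group's selection rate to the highest group's rate.
highest = max(groups.values())
for group, rate in groups.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} -> {flag}")
```

Here women's impact ratio is 0.20 / 0.30, roughly 0.67 and below the 0.8 threshold, so a screening tool producing these outcomes would be flagged for closer review. The point of the sketch is simply that such audits reduce to checkable arithmetic that law can mandate.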
The Law's Role in Mitigating Algorithmic Discrimination

Some may argue that the issue of algorithmic discrimination should be left to the states, and others may argue that federal legislation on the matter will stifle innovation in the workplace. However, algorithmic discrimination is an issue worthy of comprehensive federal recognition. Prohibiting discrimination in employment decisions is a cornerstone of U.S. employment law, and the federal government should do all it can to curb the detrimental effects that employment discrimination has on individuals, organizations, and society. Therefore, Congress must pass comprehensive federal legislation on the use of AI in the workplace, with particular attention to algorithmic discrimination.

While the use of AI in HR offers significant benefits to employees and employers, the power of AI can also propagate employment discrimination in a way never seen before. AI's capacity to drive discrimination in a systematic and unbridled fashion differs substantially from anything achievable by a human decision-maker. Thus, the unique nature of algorithmic discrimination renders traditional Title VII frameworks inadequate for regulation. A new legal framework, influenced by existing employment law yet specifically tailored to the complex forms of discrimination that AI poses, can help mitigate this burgeoning issue. Further, if the law directs developers to build features into their AI systems that enable better oversight, the government can help make AI-powered HR tools more regulable.

Nonetheless, employers seeking to use AI in their HR practices should be proactive in assessing how AI-powered tools were developed and trained. Likewise, developers of automated HR systems should harness the power of AI to create algorithmic inclusion and take the steps necessary to avoid perpetuating systematic discrimination. If Congress were to pass comprehensive federal legislation on this matter, employers and developers alike might find themselves taking steps like these to ensure a workplace free from discrimination.

Suggested Citation: Kadin Mesriani, AI & HR: Algorithmic Discrimination in the Workplace, Cornell J.L. & Pub. Pol'y, The Issue Spotter (Oct. 31, 2024), https://jlpp.org/ai-hr-algorithmic-discrimination-in-the-workplace.
Kadin Mesriani is a second-year law student at Cornell Law School. He graduated from Cornell University with a degree in Industrial and Labor Relations. In addition to his involvement with Cornell’s Journal of Law and Public Policy, Kadin serves as the Vice President of Cornell’s Middle Eastern and North African Law Students Association and as an Honors Fellow in Cornell’s Lawyering program.