 {"id":5111,"date":"2024-11-21T14:36:00","date_gmt":"2024-11-21T14:36:00","guid":{"rendered":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/?p=5111"},"modified":"2025-10-20T18:22:02","modified_gmt":"2025-10-20T18:22:02","slug":"ai-hr-algorithmic-discrimination-in-the-workplace","status":"publish","type":"post","link":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/2024\/11\/21\/ai-hr-algorithmic-discrimination-in-the-workplace\/","title":{"rendered":"AI &amp; HR: Algorithmic Discrimination in the Workplace"},"content":{"rendered":"<p style=\"text-align: center\">(<a href=\"https:\/\/hrdailyadvisor.blr.com\/2019\/06\/28\/5-ways-ai-can-help-hr-better-recruit\/\">Source<\/a>)<\/p>\n<p><strong>The Emergence of AI in HR Practices<\/strong><\/p>\n<p>Artificial intelligence (AI) is quickly reshaping the way human resources (HR) departments make decisions in the workplace. In particular, <a href=\"https:\/\/www.forbes.com\/sites\/bernardmarr\/2023\/11\/02\/how-data-and-ai-are-reshaping-contemporary-hr-practices\/\"><em>AI is currently redefining key HR practices<\/em><\/a>, including recruitment, selection, onboarding, performance management, and training and development. On the surface, the use of <a href=\"https:\/\/doi.org\/10.1016\/j.jjimei.2023.100208\"><em>AI in HR offers a myriad of benefits<\/em><\/a>: increased efficiency through streamlined HR processes, better decision-making through precise prediction and analysis, and improved employee productivity through AI personalization. Importantly, the integration of AI technologies in HR, like machine learning and natural language processing, <a href=\"https:\/\/www.ibm.com\/think\/topics\/ai-in-hr\"><em>aims to mitigate bias<\/em><\/a> as well. 
However, plaintiffs, scholars, and others argue the opposite: AI may actually <a href=\"https:\/\/doi.org\/10.1016\/j.dajour.2023.100249\"><em>perpetuate and amplify biases<\/em><\/a> in HR practices.<\/p>\n<p>According to the Society of Human Resource Management, <a href=\"https:\/\/www.shrm.org\/topics-tools\/news\/technology\/ai-adoption-hr-is-growing\"><em>around 1 in 4 employers use AI<\/em><\/a> in their HR practices, and among the organizations using AI for HR purposes, <a href=\"https:\/\/www.shrm.org\/topics-tools\/news\/technology\/ai-adoption-hr-is-growing\"><em>talent acquisition is the leading area<\/em><\/a> for its use at 64%. Further, according to Gartner, <a href=\"https:\/\/www.gartner.com\/en\/human-resources\/topics\/artificial-intelligence-in-hr\"><em>76% of HR leaders<\/em><\/a> believe that their organization will fall behind in organizational success if it fails to implement AI in the next one to two years. Thus, the adoption of AI in HR is already significant, and its prevalence will only grow. What does this new reality mean for employees, employers, and the courts?<\/p>\n<p><strong>AI\u2019s Potential for Algorithmic Bias and Discrimination<\/strong><\/p>\n<p>Between 2014 and 2018, <a href=\"https:\/\/www.cogitatiopress.com\/socialinclusion\/article\/view\/7471\/3747\"><em>Amazon developed a resume-scanning tool<\/em><\/a> that utilized AI for recruitment. Amazon trained this tool on previously recruited candidates\u2019 credentials to better identify and rank qualified applicants. However, Amazon engineers discovered that the tool systematically downgraded resumes submitted by female candidates. 
Although the gender of these applicants was never explicitly provided, the AI tool used \u201c<a href=\"https:\/\/www.cogitatiopress.com\/socialinclusion\/article\/view\/7471\/3747\"><em>indirect markers, such as \u2018captain of the women\u2019s chess club\u2019 as proxies<\/em><\/a>\u201d to identify which applicants were female and effectively screen them out.<\/p>\n<p>Research shows that <a href=\"https:\/\/doi.org\/10.1111\/1748-8583.12511\"><em>AI suffers from algorithmic bias<\/em><\/a> by reproducing and amplifying human biases. Amazon\u2019s AI recruitment tool discriminated against applicants on the basis of gender because of the data it trained on. The resume-scanning tool defined the <a href=\"https:\/\/doi.org\/10.1111\/1748-8583.12511\"><em>\u201cideal employee\u201d<\/em><\/a> based on historically biased data, which consisted of resumes predominantly submitted by men in the past. Thus, the training data organizations use in their AI-powered HR tools <a href=\"https:\/\/doi.org\/10.17645\/si.v12.7471\"><em>risk reflecting historical or present biases<\/em><\/a> instead of focusing solely on an applicant\u2019s skills and qualifications. In other words, if the data input is biased, the <a href=\"https:\/\/doi.org\/10.1016\/j.jjimei.2023.100165\"><em>output will likely be biased<\/em><\/a>. Given that <a href=\"https:\/\/doi.org\/10.1111\/1748-8583.12511\"><em>human bias<\/em><\/a> has historically plagued recruitment and selection processes, policymakers must understand the <a href=\"https:\/\/doi.org\/10.1016\/j.jjimei.2023.100165\"><em>risk of algorithmic bias<\/em><\/a> to ensure a fair and equitable workplace.<\/p>\n<p><strong>Algorithmic Discrimination in the Context of Law and Policy<\/strong><\/p>\n<p>The issue of algorithmic discrimination is already appearing in courts across the United States. 
For example, in <a href=\"https:\/\/plus.lexis.com\/document?pdmfid=1530671&amp;pddocfullpath=%2Fshared%2Fdocument%2Fcases%2Furn%3AcontentItem%3A6C0M-FD83-RRXT-G1CX-00000-00&amp;pdcontentcomponentid=6414&amp;pdislparesultsdocument=false&amp;prid=714f2431-67b5-4d65-9329-e56c2d08bf64&amp;crid=ed2d65e9-0c7f-48f7-ae25-e5543b232e3b&amp;pdisdocsliderrequired=true&amp;pdpeersearchid=443d4d1d-47c8-4b9e-89d6-1f1e65f9e8a0-1&amp;ecomp=b7ttk&amp;earg=sr0#\/document\/f4d2a0f7-9924-4c5d-9bc5-d457a4a5fec3\"><em>Saas v. Major, Lindsey &amp; Africa, LLC<\/em><\/a>, a plaintiff alleged that Major, Lindsey &amp; Africa, a recruiting firm, used \u201calgorithmic, machine learning, and other technical tools in the conduct of their business, and their use of such tools caused [her] to be unlawfully discriminated against on the basis\u201d of her sex and age. The plaintiff asserted claims of \u201cfailure to refer and \u2018algorithmic bias\u2019 in violation of [Title VII] and the [ADEA]; retaliation in violation of Title VII and the ADEA; and fraudulent inducement in violation of Maryland law.\u201d However, a district court in the District of Maryland dismissed the \u201calgorithmic bias\u201d claim because the plaintiff\u2019s allegation that the recruiting firm used AI was too speculative.<\/p>\n<p>Further, in <a href=\"https:\/\/plus.lexis.com\/document?pdmfid=1530671&amp;pddocfullpath=%2Fshared%2Fdocument%2Fcases%2Furn%3AcontentItem%3A6CH7-BN83-RSV0-B52H-00000-00&amp;pdcontentcomponentid=6419&amp;pdislparesultsdocument=false&amp;prid=f5f0355b-7e95-46e8-9edd-60569845d57d&amp;crid=746d3d00-813f-4adc-80f6-32737d39243f&amp;pdisdocsliderrequired=true&amp;pdpeersearchid=f11c4278-b4a5-4687-8afa-7c24332a4e92-1&amp;ecomp=b7ttk&amp;earg=sr0#\/document\/c9ef4f61-0312-4eb8-b0bd-343c80591a6c\"><em>Mobley v. 
Workday, Inc.<\/em><\/a>, a plaintiff alleged that Workday, a human resource management service, used algorithmic decision-making tools \u201cto screen applicants in [the] hiring process [that] discriminated against him and similarly situated job applicants on the basis of race, age, and disability.\u201d The plaintiff\u2019s applications allegedly listed his degree from a historically Black college and, in some applications, included personality tests that could indicate his mental health disorders. The plaintiff asserted that <a href=\"https:\/\/www.law360.com\/articles\/1873091\/workday-ai-bias-suit-suggests-hiring-lessons-for-employers\"><em>Workday\u2019s AI tools relied on biased training data<\/em><\/a>, prompting him to bring \u201cdisparate impact and disparate treatment claims under Title VII, the [ADEA] and the [ADA].\u201d Although a district court in the Northern District of California dismissed the disparate treatment claim, that court allowed the disparate impact claim to proceed because the plaintiff\u2019s complaint \u201csupport[ed] a plausible inference that Workday\u2019s screening algorithms were automatically rejecting Mobley\u2019s applications based on a factor other than his qualifications, such as a protected trait.\u201d<\/p>\n<p>The <em>Mobley<\/em> case teaches us that organizations <a href=\"https:\/\/www.law360.com\/articles\/1873091\/workday-ai-bias-suit-suggests-hiring-lessons-for-employers\"><em>cannot escape liability<\/em><\/a> by using AI systems for their HR decisions. 
Notably, the <em>Mobley<\/em> court asserted that \u201c[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era.\u201d Accordingly, the <em>Mobley<\/em> case also teaches us that both developers and users of AI tools may be <a href=\"https:\/\/www.law360.com\/articles\/1873091\/workday-ai-bias-suit-suggests-hiring-lessons-for-employers\"><em>held liable for discrimination<\/em><\/a> under existing law. However, the <em>Saas<\/em> case shows us that it may be difficult for plaintiffs to succeed on <a href=\"https:\/\/online.law.tulane.edu\/blog\/artificial-intelligence-on-hr-processes\"><em>claims of algorithmic discrimination<\/em><\/a> in the workplace under current law. Thus, ambiguity exists among employees, employers, and the courts on issues similar to those introduced in <em>Saas<\/em> and <em>Mobley<\/em>.<\/p>\n<p>This ambiguity exists in large part because the United States does not have any <a href=\"https:\/\/online.law.tulane.edu\/blog\/artificial-intelligence-on-hr-processes\"><em>comprehensive legislation<\/em><\/a> regulating the use of AI. Instead, the federal government has relied on actions like <a href=\"https:\/\/www.whitehouse.gov\/briefing-room\/presidential-actions\/2023\/10\/30\/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\/\"><em>Executive Order 14110<\/em><\/a>, issued by President Biden in 2023, and the White House\u2019s blueprint for an <a href=\"https:\/\/whitehouse.gov\/ostp\/ai-bill-of-rights\/#discrimination\"><em>AI Bill of Rights<\/em><\/a>. Both of these actions acknowledge the impact of algorithmic discrimination and provide hope for a fairer and more equitable workplace amid the rise of AI. 
For example, Executive Order 14110 states that \u201c[i]t is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse,\u201d and the White House\u2019s blueprint for an AI Bill of Rights devotes an entire section to \u201cAlgorithmic Discrimination Protections.\u201d This section states that individuals <em>should<\/em> not face algorithmic discrimination and that users and developers of automated systems <em>should <\/em>use and design these systems in an equitable way. While these statements are promising, vulnerable populations will continue to face algorithmic discrimination in the workplace without any federal legislation on the matter.<\/p>\n<p>In the absence of federal legislation, some states and localities\u2014<a href=\"https:\/\/www.law360.com\/articles\/1873091\/workday-ai-bias-suit-suggests-hiring-lessons-for-employers\"><em>including Illinois, New York City, Colorado, and California<\/em><\/a>\u2014are attempting to regulate how employers use AI in HR decisions to prevent discrimination. Additionally, federal agencies, including the <a href=\"https:\/\/www.eeoc.gov\/joint-statement-enforcement-civil-rights-fair-competition-consumer-protection-and-equal-0\"><em>EEOC<\/em><\/a> and the <a href=\"https:\/\/www.dol.gov\/general\/AI-Principles\"><em>DOL<\/em><\/a>, have issued initiatives, guidance, and other materials in an attempt to clarify that the use and design of AI-powered HR tools may result in discrimination in violation of the law.<\/p>\n<p><strong>The Law\u2019s Role in Mitigating Algorithmic Discrimination<\/strong><\/p>\n<p>Some may argue that the issue of algorithmic discrimination should be left to the states, and others may argue that federal legislation on the matter will stifle innovation in the workplace. However, algorithmic discrimination is an issue worthy of comprehensive federal recognition. 
Prohibiting discrimination in employment decisions is a cornerstone of U.S. employment law, and the federal government should do all it can to curb the detrimental effects that employment discrimination has on individuals, organizations, and society. Therefore, Congress must pass comprehensive federal legislation on the issue of AI use in the workplace, with particular attention to algorithmic discrimination. While the use of AI in HR offers incredible benefits to employees and employers, the power of AI can also propagate employment discrimination in a way never seen before.<\/p>\n<p>AI\u2019s capability to drive discrimination in both a systematic and unbridled fashion differs substantially from anything achievable by a human decision-maker. Thus, the unique nature of algorithmic discrimination renders traditional Title VII frameworks inadequate for regulation. A new legal framework, influenced by existing employment law yet specifically tailored to address the complex forms of discrimination that AI poses, can help mitigate this burgeoning issue. Further, if the law directs developers to build features into their AI systems that <a href=\"https:\/\/cyber.harvard.edu\/works\/lessig\/LNC_Q_D2.PDF\"><em>enable better regulation<\/em><\/a>, the government can help make AI-powered HR tools more regulable.<\/p>\n<p>Nonetheless, employers seeking to use AI in their HR practices should <a href=\"https:\/\/www.law360.com\/articles\/1873091\/workday-ai-bias-suit-suggests-hiring-lessons-for-employers\"><em>be proactive<\/em><\/a> in assessing how AI-powered tools were developed and trained. Further, developers of automated HR systems should utilize the power of AI to create <a href=\"https:\/\/doi.org\/10.1111\/1748-8583.12511\"><em>algorithmic inclusion<\/em><\/a> and take the steps necessary to prevent any possibility of perpetuating systematic discrimination. 
If Congress were to pass comprehensive federal legislation on this matter, employers and developers alike might find themselves taking steps like these to ensure a workplace free from discrimination.<\/p>\n<p>Suggested Citation: Kadin Mesriani, <em>AI &amp; HR: Algorithmic Discrimination in the Workplace<\/em>, Cornell J.L. &amp; Pub. Pol\u2019y, The Issue Spotter (Oct. 31, 2024), <a href=\"https:\/\/jlpp.org\/ai-hr-algorithmic-discrimination-in-the-workplace\">https:\/\/jlpp.org\/ai-hr-algorithmic-discrimination-in-the-workplace<\/a>.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"180\" height=\"196\" src=\"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-content\/uploads\/sites\/3\/2024\/10\/Kadin-Mesriani-Headshot.jpg\" alt=\"Kadin Mesriani headshot\" class=\"wp-image-4714\" style=\"width:220px;height:auto\" \/><figcaption class=\"wp-element-caption\">Kadin Mesriani is a second-year law student at Cornell Law School. He graduated from Cornell University with a degree in Industrial and Labor Relations. In addition to his involvement with Cornell\u2019s Journal of Law and Public Policy, Kadin serves as the Vice President of Cornell\u2019s Middle Eastern and North African Law Students Association and as an Honors Fellow in Cornell\u2019s Lawyering program.<\/figcaption><\/figure>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>(Source) The Emergence of AI in HR Practices Artificial intelligence (AI) is quickly reshaping the way human resources (HR) departments make decisions in the workplace. 
In particular, AI is currently redefining key HR practices, including recruitment, selection, onboarding, performance management, and training and development. On the surface, the use of AI in HR offers a&#8230;<\/p>\n","protected":false},"author":1,"featured_media":4768,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[18],"tags":[],"class_list":["post-5111","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-feature"],"acf":[],"_links":{"self":[{"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/posts\/5111","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/comments?post=5111"}],"version-history":[{"count":5,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/posts\/5111\/revisions"}],"predecessor-version":[{"id":5157,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/posts\/5111\/revisions\/5157"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/media\/4768"}],"wp:attachment":[{"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/media?parent=5111"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/categories?post=5111"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/publications.lawschool.cornell.edu\/jlpp\/wp-json\/wp\/v2\/tags?post=5111"}],"curies":[{"name
":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}