
Can AI Discriminate Against Potential Employees?

As artificial intelligence (AI) and automated decision systems (ADS) become more embedded in recruitment, employers are seeing how these technologies can streamline hiring and boost efficiency. However, the rapid adoption of AI in hiring also raises a significant concern: Can AI discriminate against potential employees? The answer is complex. While AI systems can be designed to act impartially, they can also unintentionally perpetuate or even amplify biases, particularly when they are not carefully built and monitored.

How AI Works in Hiring

AI in recruitment is largely powered by machine learning (ML) algorithms trained to analyze data. These algorithms sift through resumes, evaluate qualifications, predict job performance, and even conduct interviews in some cases. By processing data in structured formats, these systems can handle high volumes of applicants, helping employers make faster decisions.

However, machine learning relies heavily on data sets from past hiring decisions, which may reflect existing biases if they aren’t carefully managed. For example, if an algorithm is trained on historical hiring data that favored specific demographics or excluded certain groups, it can learn to replicate these biases, even if they’re unintentional.
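
To make this concrete, below is a minimal sketch in Python (using the NumPy and scikit-learn libraries) of how a model trained on biased historical decisions can reproduce that bias. Every number, threshold, and feature here is invented for illustration; no real hiring system is this simple.

    # Hypothetical illustration: a model trained on biased historical
    # hiring decisions learns to reproduce the bias. All data invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    score = rng.normal(0, 1, n)     # qualification score, identical
                                    # distribution for both groups
    group = rng.integers(0, 2, n)   # demographic group: 0 or 1

    # Historical decisions held group 1 to a higher bar; that bias is
    # hidden in the training labels.
    hired = (score - 0.8 * group > 0).astype(int)

    # Training with group membership as a feature bakes the bias in.
    X = np.column_stack([score, group])
    model = LogisticRegression().fit(X, hired)

    # Two applicants with the same qualification score but different
    # groups now get very different predicted chances of being hired.
    same_score = np.array([[0.5, 0.0], [0.5, 1.0]])
    print(model.predict_proba(same_score)[:, 1])

Note that simply deleting the group column does not cure the problem when other features, such as zip code, school, or extracurricular activities, act as proxies for group membership.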

How AI Can Discriminate: Key Bias Mechanisms

AI and ADS can produce biased results due to several key factors:

  1. Biased Training Data: Machine learning algorithms are only as objective as the data they are trained on. If an employer’s historical data includes biases favoring certain groups over others, an AI trained on that data can adopt and even amplify those preferences. For instance, if a company’s historical data shows a preference for candidates from specific schools or backgrounds, the AI might disproportionately favor those groups, filtering out qualified applicants from diverse backgrounds.
  2. Algorithmic Bias: Even when data is neutral, the algorithm’s design can introduce biases. For example, the algorithm might weigh factors like years of experience or education in a way that disproportionately impacts certain groups. One real-world example involved a large tech company’s AI-powered recruitment tool, which unintentionally favored male applicants because it was trained on resumes submitted predominantly by men in technical roles. The algorithm learned to associate terms more commonly used by men with success, thereby disadvantaging female applicants; a simplified sketch of this word-association mechanism appears after this list.
  3. Lack of Transparency: AI decision-making can often be a “black box,” meaning the criteria and logic used by the AI may be difficult to understand or interpret. This opacity can make it challenging to identify when and where discrimination is occurring, particularly if the AI evaluates candidates on factors not directly visible to human recruiters. This lack of transparency also makes it harder for candidates to appeal decisions or demonstrate that bias influenced their evaluation.
  4. Unintentional Bias in Data Collection: Sometimes, bias enters the process during data collection. For example, if certain demographics are underrepresented in the data collected, the algorithm might interpret this as a lack of qualifications among these groups rather than a result of underrepresentation. Thus, the AI may score them lower, leading to discriminatory outcomes.
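
The word-association failure in item 2 can be illustrated with a small, entirely invented example: a text classifier trained on “historical” resumes, where past hires happened to skew toward certain activities, learns a negative weight for a word that says nothing about qualifications. This sketch uses Python and scikit-learn; all resumes and labels are made up.

    # Hypothetical illustration of the word-association mechanism.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "chess club captain, python, leadership",        # hired
        "football team captain, java, leadership",       # hired
        "python, data analysis, volunteering",           # hired
        "women's soccer captain, python, leadership",    # not hired
        "women's chess club president, java, analysis",  # not hired
        "sorority president, data analysis, python",     # not hired
    ]
    hired = [1, 1, 1, 0, 0, 0]

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The learned weight on the token "women" is negative, even though
    # it carries no information about job qualifications; the model
    # has encoded a historical pattern, not merit.
    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print(weights["women"])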

Real-World Examples of AI Discrimination

A new study demonstrates that these issues are far from hypothetical. The paper, titled “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval,” was published by researchers at the University of Washington. It found that AI hiring tools built on large language models carry fundamental biases inherited from their training data. The study reviewed over 500 publicly available resumes and job descriptions and found that the tools discriminated based on both race- and gender-associated traits.

According to the paper, these tools significantly favor “White-associated names in 85.1% of cases and female-associated names in only 11.1% of cases, with a minority of cases showing no statistically significant differences. Further analyses show that Black males are disadvantaged in up to 100% of cases, replicating real-world patterns of bias in employment settings.” These examples highlight the need for employers to actively audit AI tools, ensuring they align with anti-discrimination laws and principles.
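
One common audit technique, essentially what the study above performed at scale, is name substitution: hold the resume constant, vary only the name, and compare the tool’s scores. The sketch below shows the shape of such a harness; score_resume is a hypothetical placeholder for whatever screening model is actually under test.

    # Sketch of a name-substitution audit. `score_resume` is a
    # hypothetical stand-in for the real screening system under test.
    def score_resume(text: str) -> float:
        # Replace this with a call to the production screening model.
        return 0.0

    RESUME = "{name}\n10 years of experience, B.S. in Accounting, CPA"

    # Names chosen to signal different demographics, as in audit studies.
    names = ["Emily Walsh", "Greg Baker",
             "Lakisha Washington", "Jamal Jones"]

    # Identical qualifications, so any systematic score gap between
    # name groups is evidence the model is keying on the name itself.
    for name in names:
        print(name, score_resume(RESUME.format(name=name)))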

Legal Framework Addressing AI Discrimination in Hiring

In the U.S., employment laws like Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA), along with Equal Employment Opportunity Commission (EEOC) guidance, protect workers from discrimination based on protected classes, including race, gender, age, and disability. Although these laws were not written with AI in mind, they still apply to AI-driven hiring practices if the technology produces biased results.

Federal Guidance on AI Bias

To address the unique challenges AI brings to employment, the EEOC has issued guidance that extends traditional anti-discrimination standards to AI tools. Employers are required to ensure that AI systems do not create a disparate impact on protected groups unless they can demonstrate that the criteria used are job-related and consistent with business necessity. Employers must also regularly audit and adjust any AI tools to mitigate potential biases.
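
A common first-pass check in such audits is the “four-fifths” (80%) rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate falls below 80% of the highest group’s rate, the tool generally warrants closer scrutiny. Here is a minimal sketch of that check, with invented applicant counts:

    # Four-fifths (80%) rule check with invented applicant counts.
    def selection_rate(selected, applicants):
        return selected / applicants

    rate_group_a = selection_rate(selected=60, applicants=100)  # 0.60
    rate_group_b = selection_rate(selected=24, applicants=100)  # 0.24

    impact_ratio = rate_group_b / rate_group_a
    print(f"Impact ratio: {impact_ratio:.2f}")  # 0.40

    # A ratio below 0.80 is generally treated as evidence of adverse
    # impact and a signal to audit and adjust the selection tool.
    if impact_ratio < 0.8:
        print("Potential disparate impact detected.")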

Additionally, the White House’s Blueprint for an AI Bill of Rights and recent efforts by the Federal Trade Commission (FTC) signal a growing focus on regulating AI, particularly in consumer-facing contexts, which may extend to employment decisions.

California’s Pioneering Regulations

California has taken a proactive stance on regulating AI in employment through the California Civil Rights Division (CRD). Recent proposed rules on automated decision systems seek to ensure that AI in hiring is used responsibly and fairly. The regulations clarify key terms like “automated decision system” and “agent” to specify that employers and third-party vendors using AI to evaluate applicants are accountable under the law. These rules highlight that employers must provide transparency, perform anti-bias testing, and adhere to non-discrimination standards when using AI-driven hiring tools.

Can AI Discriminate Against Potential Employees?

AI in hiring has enormous potential to improve efficiency, reduce human biases, and increase opportunities for qualified applicants. However, AI can also discriminate if not carefully monitored and designed. If you believe you have faced discrimination due to AI used in the hiring process, you may have grounds for an employment discrimination claim. Learn more about your options for hiring discrimination cases by scheduling your consultation with the experienced attorneys at the Law Offices of Todd M. Friedman, P.C.

This is attorney advertising. These posts are written on behalf of Law Offices of Todd M. Friedman, P.C. and are intended solely as informational content. These blogs in no way provide specific or actionable legal advice, nor does your use of or engagement with this site establish any attorney-client relationship. Please read the disclaimer.