Just a few short years ago, when AI started gaining traction in HR processes, particularly recruitment and selection, many people in HR compliance wondered when we would start to see legal challenges. Fast forward to the present, and AI-enabled selection tools have been challenged from multiple angles at once: employment discrimination statutes, consumer reporting rules, state hiring restrictions, and even advertising/marketing claims. In this blog, we'll summarize key cases from the last several years, along with the legal frameworks under which the tools have been challenged. At the end, you'll find a brief list of considerations to help your organization prepare for the variety of legal challenges that may arise.
This matter alleges that an AI-enabled hiring workflow can function as an employee selection procedure and may produce discriminatory outcomes for which employers and others could be responsible under existing non-discrimination laws. A central idea is that the vendor can be treated as an agent of the employer when it materially shapes selection decisions. The case is still pending and there has been no final decision, but if the legal theories succeed, they would create exposure tied to discrimination based on age (ADEA), disability (ADA), and race, sex, and other protected categories (Title VII).
HR/TA takeaway: If the tool screens, ranks, or dispositions applicants, it may be treated like a test, meaning the employer should be prepared to defend how it works and why it's job-related. Whether an AI vendor can be treated as the employer's agent and share liability for discrimination remains to be decided, but employers should take steps now to evaluate their exposure under anti-discrimination laws when using AI-enabled tools in their selection processes.
This matter raises the theory that AI systems that scrape or assemble applicant data can generate outputs that function like a consumer report. If employers use that output for hiring decisions, they may face challenges to the process of using the tool, rather than just the tool itself. This dispute focuses on whether required notices, authorizations, and adverse action procedures were followed under the Fair Credit Reporting Act (FCRA).
HR/TA takeaway: Your risk isn’t only “bias.” It can be process compliance: what data is collected, how it’s used, and whether applicant notices/permissions and required steps occur when the tool influences a decision. This case is ongoing, so stay tuned for future developments.
These complaints challenge claims that tools are "bias free," tying marketing statements to how the tools actually perform for people with disabilities. They also raise Americans with Disabilities Act (ADA) concerns about tools screening out individuals with disabilities even when the tool doesn't explicitly ask about disability. The complaints were filed with the FTC under deceptive trade practices theories.
HR/TA takeaway: "Bias free" messaging can backfire. HR/TA can inherit vendor marketing risk if internal stakeholders repeat those claims or rely on them instead of evidence. Thoroughly vet any "bias free" claims and consider conducting an analysis specific to your own use of the tool and your own data to determine the degree of bias that may be present. A key point is that an AI-enabled tool can work differently for different employers, so do your homework when selecting and implementing AI-enabled employment processes. In addition, employers need to think about compliance holistically: it's not just the EEOC that is watching how employers and vendors use AI.
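To make that analysis concrete, here is a minimal Python sketch of the four-fifths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures, one common starting point for measuring adverse impact. The group labels and counts are hypothetical; a real audit should run on your own applicant data, ideally with legal or industrial-organizational psychology guidance.

```python
# Minimal sketch of a four-fifths (80%) rule check. All counts below are
# hypothetical; run this kind of analysis on your own applicant data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who passed the screen."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    groups maps a group label to (selected, applicants). Returns each
    group's impact ratio; ratios below 0.80 suggest potential adverse
    impact that warrants a closer look.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screen results: (passed, applied)
results = four_fifths_check({
    "Group A": (48, 100),   # 48% selection rate
    "Group B": (30, 100),   # 30% selection rate
})
for group, ratio in results.items():
    flag = "review" if ratio < 0.80 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# Group B's ratio is 0.30 / 0.48 = 0.63, below the 0.80 benchmark.
```

A ratio below 0.80 is not proof of unlawful discrimination, but it is the kind of signal that should trigger a job-relatedness review rather than being discovered for the first time in litigation.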
This matter alleged age-based disparate treatment using different age thresholds for women (55 and older) and men (60 and older), violating the Age Discrimination in Employment Act (ADEA). It's an example of a straightforward "rule-out" approach implemented through automated processes: iTutorGroup used an algorithm to identify optimal applicants, and the resulting tool ended up excluding females aged 55 and older and males aged 60 and older. iTutorGroup agreed to pay $365,000 and furnish other relief to settle the suit.
HR/TA takeaway: Automation can scale a bad rule instantly. Any knockout questions, auto-disposition rules, or “hard” thresholds should be reviewed as if they were written policy, because functionally, they are. Ensure that the process the AI tool is using to screen applicants is explainable, job-related, and facially neutral.
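As one illustration of reviewing hard rules like written policy, here is a hypothetical Python sketch in which auto-disposition rules are stored as reviewable data and audited for references to protected characteristics. Every name and rule here is invented for the example; it is not any vendor's actual implementation.

```python
# Minimal sketch, assuming a hypothetical screening pipeline where
# knockout rules live as data that compliance can read, not logic
# buried in code. Field names and rules are illustrative only.

PROTECTED_FIELDS = {"age", "sex", "race", "disability", "date_of_birth"}

# Each rule: (name, applicant field it tests, plain-language description)
KNOCKOUT_RULES = [
    ("license_required", "has_drivers_license", "must be True"),
    ("max_age_cutoff", "age", "reject if over threshold"),  # the bad rule
]

def audit_rules(rules):
    """Flag any auto-disposition rule that keys on a protected field.

    A flagged rule isn't automatically unlawful, but it should be
    reviewed exactly as a written screening policy would be.
    """
    for name, field, description in rules:
        status = "FLAG: protected field" if field in PROTECTED_FIELDS else "ok"
        print(f"{name}: tests '{field}' ({description}) -> {status}")

audit_rules(KNOCKOUT_RULES)
```

Note that this kind of audit only catches facially discriminatory rules; facially neutral rules still need the disparate impact analysis described above.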
This matter challenges an AI assessment described as measuring “reliability, honesty, and integrity,” tying it to a Massachusetts state restriction on lie detector tests. The theory is that a tool can be treated like a prohibited credibility/lie-detection mechanism based on what it functionally evaluates, not just how it’s labeled.
HR/TA takeaway: State and local rules can create unique risks. A tool that seems like a general “integrity” screen may trigger a state-law problem depending on how it’s designed and marketed. Ensure you understand what the AI tool is measuring, despite any claims that the tool is not a test or assessment.
Like any other type of employee selection procedure, AI tools can be valuable aids in making employment decisions, but employers should know what the tools are measuring, be able to explain how the tools work, and be able to show that a tool is job-related if it results in disparate outcomes. Below are some key governance steps employers can take to prepare for legal challenges involving their AI tools.
At Berkshire, we regularly assist clients with these types of questions and governance activities. Reach out to us if you have questions or concerns about finding, using, or defending your AI tools.