EEOC Guidance: Algorithmic Decision-Making and Title VII

Posted by Nick Setser on June 1, 2023

The EEOC recently released Technical Guidance on the use of automated tools in various HR-related tasks. The document addresses how Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination on the basis of race, color, religion, sex, or national origin, applies to these algorithmic decision-making tools. The EEOC's guidance, presented as a series of questions and answers, clarifies that employers must monitor AI-based selection tools with the same scrutiny as traditional selection tools, including by complying with the principles set out in the Uniform Guidelines on Employee Selection Procedures.

AI-based tools have become an option in just about every aspect of the HR and selection process, from recruiting to performance management, and employers are using these tools more than ever. Often, employers aren't aware of exactly what these tools are measuring, a blind spot that can create Title VII exposure. In Questions 1 and 2, the EEOC makes clear that if employers use AI-based tools to make or inform employment decisions, those tools are "selection procedures" covered under Title VII and must therefore be assessed for potential adverse impact. The technical assistance also specifies that the onus for any discriminatory impact of tools purchased from vendors falls on the employers that use them: "[I]f an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor." The EEOC provides employers with guidance on questions they should ask vendors, along with a reminder that employers should consider "whether there are alternatives that may meet the employer's needs and have less of a disparate impact," including any less discriminatory algorithms that were considered during the development process.

This technical assistance comes on the heels of a similar but more detailed document from SIOP, NIST's special publication on managing bias in AI, and recent legislation in New York regarding the use of these tools. These tools aren't going anywhere, but employers are responsible for monitoring any automated or algorithmic processes they use to ensure there is no adverse impact against any protected group. One method for analyzing a tool's fairness is the four-fifths rule, which checks whether use of a procedure causes the selection rate for one protected group to be substantially lower than the rate for another group; adverse impact is conventionally flagged when the ratio of the two selection rates falls below 80% (four-fifths). However, it is important to keep in mind that this "rule" is a rule of thumb, not a legal bright line. To ensure fairness, employers need to know exactly what a selection tool measures, whether and how that measure relates to successful job performance, and whether performance is being fairly and accurately assessed. An AI-based tool is subject to the same standards as a traditional selection tool, and employers need to understand and monitor these tools to maintain compliance and equity within their organization.
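To make the four-fifths arithmetic concrete, here is a minimal Python sketch of the calculation. The group labels and applicant counts are hypothetical, and a rule-of-thumb flag like this is a starting point, not a substitute for a full adverse-impact analysis on an employer's actual applicant-flow data.

```python
# Minimal sketch of the four-fifths rule of thumb described above.
# Group labels and counts below are hypothetical examples.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Compare each group's selection rate to the highest group's rate.

    Ratios below 0.80 flag potential adverse impact under the
    four-fifths rule of thumb.
    """
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical applicant-flow data: (selected, total applicants)
data = {"Group A": (48, 100), "Group B": (30, 100)}
rates = {g: selection_rate(s, n) for g, (s, n) in data.items()}

for group, ratio in impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.80 else "within four-fifths"
    print(f"{group}: selection rate {rates[group]:.0%}, "
          f"impact ratio {ratio:.2f} ({flag})")
```

In this example, Group B's selection rate (30%) is 62.5% of Group A's (48%), which falls below the 80% threshold and would warrant closer scrutiny of the tool.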

Nick Setser
Nick Setser, M.A., Industrial & Organizational Psychology, is a Compensation Services Consultant at Berkshire Associates Inc. Nick regularly advises clients on compensation best practices, offering practical guidance on how to navigate OFCCP risk, market analysis, and pay equity.
