As employers across the country vie for talent, some companies are looking to artificial intelligence (AI) to gain a competitive advantage in hiring. In the recruitment space, AI is rapidly being applied to help companies review applications, evaluate candidate skills, and assist with candidate selection. But as the practice becomes more commonplace, the use of AI has raised concerns about oversight and discrimination. The Equal Employment Opportunity Commission (EEOC) and the Office of Federal Contract Compliance Programs (OFCCP) have both signaled their interest in AI across the employment life cycle. Why are these agencies so interested in AI, and what are the concerns?
Let’s first take a step back and get some clarity on application software, algorithms, and artificial intelligence.
Application software, often called "apps," usually refers to the instructions, data, or programs used to operate computers and execute specific tasks. Employment application software can include hiring software, video interviewing software, employee monitoring software, automatic resume-screening software, chatbot software for hiring and workflow, analytics software, and worker management software.
An algorithm is a set of instructions or rules (a step-by-step process) for solving a problem or accomplishing a specific task. Algorithms are used in human resources application software to assist employers with data processing so they can evaluate, measure, and make other employment-related decisions concerning employees and applicants. These decision-making tools can be used for hiring, promotion, termination, and performance evaluation.
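To make the idea of a step-by-step decision rule concrete, here is a deliberately simplified, hypothetical sketch of the kind of algorithm a resume-screening tool might apply. The keyword list, scoring rule, and threshold are invented for illustration and are not drawn from any actual product:

```python
# Hypothetical resume-screening algorithm: a fixed, step-by-step rule
# that scores an application by counting required keywords.
# All keywords and thresholds below are illustrative assumptions.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def score_resume(resume_text: str) -> int:
    """Step 1: count how many required keywords appear in the resume."""
    text = resume_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

def passes_screen(resume_text: str, threshold: int = 2) -> bool:
    """Step 2: advance the candidate only if enough keywords match."""
    return score_resume(resume_text) >= threshold

print(passes_screen("Experienced in Python and SQL reporting"))  # True
print(passes_screen("Ten years of team leadership"))             # False
```

Even a rule this simple makes an employment-related decision automatically, which is why regulators treat such tools as "selection procedures" subject to the same scrutiny as any other screen.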
Congress defined artificial intelligence, or “AI,” in the National Artificial Intelligence Initiative Act of 2020 at section 5002(3) as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” AI can be used by employers and software vendors to develop algorithms that can assist in evaluating, ranking, and in making other decisions regarding employees and applicants.
The implications of AI have drawn increased scrutiny from the EEOC and OFCCP, as well as the HR industry at-large, as more and more employers fold this technology into their HR processes.
In fact, in 2021, the EEOC established an agency-wide AI and Algorithmic Fairness Initiative to examine the use of this emerging technology and ensure that it complies with the federal civil rights laws the EEOC enforces.
AI and the ADA: What Employers and Applicants Should Know
As part of its AI Fairness Initiative, the EEOC released guidance in May 2022 addressing the ADA and the use of AI in the recruitment process. Employers should familiarize themselves with the ADA regulations so they do not violate them when using algorithmic decision-making tools. The guidance points out the following major areas of concern:
Reasonable accommodations – the ability to offer reasonable accommodations to job applicants or employees so they are accurately and fairly rated by the algorithm. Does the process tell employees and job applicants that reasonable accommodations are available, explain the steps involved, ask whether an accommodation is needed, and recognize that an individual does not need to use the words "reasonable accommodation" for one to be offered? Employers should also be aware that when they use a vendor to administer algorithmic decision-making tools, the employer remains responsible for the vendor's actions.
Screening out – using algorithmic decision-making tools that, intentionally or unintentionally, screen out individuals with a disability who could do the job if a reasonable accommodation were provided. An individual could be screened out if their disability causes a lower test score, or produces an assessment result the employer views as unacceptable, and the individual loses the job opportunity as a result. For example, software that analyzes a job applicant's speech patterns to gauge problem-solving ability could reject an individual with a speech impediment or assign them a lower or unacceptable score. Another problematic outcome arises when the algorithmic decision-making tool is not measuring what it is intended to measure.
Disability-related inquiries/medical examinations – using algorithmic decision-making tools that ask job applicants or employees questions likely to cause them to reveal a disability before an offer of employment is made. Questions that would elicit information, directly or indirectly, about whether the applicant or employee is an individual with a disability (physical or mental impairments or health) qualify as "disability-related inquiries," and assessments that seek such information can qualify as "medical examinations."
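The "screening out" concern above can be sketched in code. The following hypothetical example (the threshold, scores, and routing logic are all invented for illustration, not taken from the EEOC guidance or any real tool) shows how a rigid score cutoff rejects a qualified candidate whose disability depresses an automated score, and how an accommodation path changes the outcome:

```python
# Hypothetical illustration of "screening out": a hard score cutoff
# rejects a candidate whose disability lowers an automated assessment
# score, unless the process routes them to an alternative assessment.
from dataclasses import dataclass

PASS_THRESHOLD = 70  # illustrative cutoff, chosen arbitrarily

@dataclass
class Candidate:
    name: str
    automated_score: int           # e.g., from a speech-analysis tool
    accommodation_requested: bool  # candidate asked for an alternative

def screen(candidate: Candidate) -> str:
    """Decide the next step. A rigid cutoff alone can screen out
    candidates who could do the job with a reasonable accommodation."""
    if candidate.accommodation_requested:
        # Route to a human-administered alternative assessment
        # instead of applying the automated cutoff.
        return "alternative assessment"
    return "advance" if candidate.automated_score >= PASS_THRESHOLD else "reject"

# A qualified candidate with a speech impediment scores low on the
# automated tool; without an accommodation path they are rejected.
print(screen(Candidate("A", 55, accommodation_requested=False)))  # reject
print(screen(Candidate("A", 55, accommodation_requested=True)))   # alternative assessment
```

The point of the sketch is that the discriminatory effect lives in the process design, not in any single line of code: the same scoring tool produces lawful or unlawful outcomes depending on whether an accommodation route exists.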
This technical assistance guidance is just one component of an ongoing effort by the EEOC to educate employers, employees, and other stakeholders about the application of EEO laws as they relate to AI in HR processes.
Recent Developments in AI Oversight
Berkshire has also been on the ground floor of this rapidly evolving conversation. In late December 2022, Berkshire's Lynn Clements, Esq., joined an interdisciplinary Artificial Intelligence Technical Advisory Committee (TAC) convened by the Institute for Workplace Equality. The TAC was chaired by Resolution Economics partner Victoria A. Lipnic, Esq., former Commissioner and former Acting Chair of the EEOC, and comprised 40 subject matter experts, among them Clements and members of ResEcon's Human Capital Strategy Artificial Intelligence Audit team: Lipnic, Gurkan Ay, Ph.D., Margo Pave, Esq., and Ye Zhang, Ph.D.
The TAC's report examines the key Equal Employment Opportunity (EEO) and Diversity, Equity, Inclusion & Accessibility (DEI&A) issues that employers need to understand and address when using AI-enabled employment tools.
The report summary highlights 17 key findings, which are outlined in this blog post.
In November 2022, the OFCCP proposed to collect information about an employer’s recruitment and screening practices, including the use of AI, as part of routine compliance reviews, or audits. Specifically, contractors would have to submit to OFCCP:
“[D]ocumentation of policies and practices regarding all employment recruiting, screening, and hiring mechanisms, including the use of artificial intelligence, algorithms, automated systems or other technology-based selection procedures.”
Although these changes are not yet finalized, the proposal reflects the agency's increased interest in employers' use of technology to make employment decisions. In fact, the agency has already been asking about an employer's use of artificial intelligence during some current audits, especially if the agency's preliminary analyses of the contractor's applicant and hire data reveal significant differences in selection rates by race or gender.
Then, in January 2023, the EEOC held a public hearing on the potential benefits and harms of AI in employment decisions.
“The use and complexity of technology in employment decisions is increasing over time,” said EEOC Chair Charlotte A. Burrows during the hearing. “The goals of this hearing were to both educate a broader audience about the civil rights implications of the use of these technologies and to identify next steps that the Commission can take to prevent and eliminate unlawful bias in employers’ use of these automated technologies. We will continue to educate employers, workers and other stakeholders on the potential for unlawful bias so that these systems do not become high-tech pathways to discrimination.”
Most recently, four federal agencies (the Federal Trade Commission, the Consumer Financial Protection Bureau, the Department of Justice, and the EEOC) weighed in with a joint statement on AI in late April 2023.
“Today, our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation. We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies,” the statement concluded.
Berkshire and ResEcon are carefully monitoring ongoing discussions around the governance of AI, and we are uniquely positioned to help our clients understand where the industry is headed, what risks are associated with AI-driven processes, and how companies can navigate this rapidly evolving technology in the HR space while mitigating risk.
For more information on how we can help you with AI, feel free to reach out to: email@example.com.