Artificial Intelligence is currently a hot topic in the HR world as experts debate how to harness its potential for increasingly effective employment decision-making while ensuring that it is an unbiased, reliable, and valid predictor of performance. While this debate is ongoing, many employers are pushing forward with these tools, often without knowing exactly how they work or how to mitigate the risks and biases they may carry. The debate is also a focus for lawmakers and agencies across the country, with New York requiring that AI-based HR tools be audited for bias and the EEOC announcing in 2021 an initiative to ensure that AI tools used in employment decisions comply with federal anti-discrimination laws.
Current Legal Landscape:
- In California, employees and job applicants have the right to access any personal data that an employer or prospective employer collects via automated tools. This includes the right to request deletion of that information and to make corrections.
- Starting April 15, 2023, New York City’s Local Law 144 will go into effect, giving the Department of Consumer and Worker Protection (DCWP) the right to audit employer use of automated employment decision tools (AEDTs) for bias.
- Illinois regulates the use of AI in video interviews, requiring consent from candidates, giving candidates the right to request deletion of their video interviews, and requiring that employers provide candidates advance notice of the use of AI.
- Maryland requires that employers utilizing facial recognition tools obtain consent from applicants prior to their use.
So, where can employers already utilizing these tools turn for guidance? Several recent reports provide employers guidance in this area, including the report issued by the Artificial Intelligence Technical Advisory Committee of the Institute for Workplace Equality, chaired by ResEcon’s Vicki Lipnic. This January, the Society for Industrial and Organizational Psychology (SIOP) released its 36-page recommendations for AI-based assessments, titled ‘Considerations and Recommendations for the Validation and Use of AI-based Assessments for Employee Selection’. The recent SIOP guidance comes from a task force the organization launched in 2021 in response to the growth of AI in employment decisions, and it attempts to provide scientifically grounded best practices on the subject.
The SIOP guidelines contain five sections, which read as follows:
- Section 1. AI-Based Assessments Should Produce Scores that Predict Future Job Performance or Other Relevant Outcomes Accurately
- Section 2. AI-Based Assessments Should Produce Consistent Scores that Reflect Job-Related Characteristics (e.g., upon re-assessment)
- Section 3. AI-Based Assessments Should Produce Scores that are Considered Fair and Unbiased
- Section 4. Operational Considerations and Appropriate Use of AI-Based Assessments for Hiring
- Section 5. All Steps and Decisions Relating to the Development and Scoring of AI-Based Assessments Should be Documented for Verification and Auditing.
One of the important themes the document emphasizes is that AI-based assessments should be held to the same standards that other employment assessments have been held to for years, even though the unique nature of these tools will sometimes require different methods of evaluating whether those standards are met. One emphasis is the importance of defining the construct being assessed and its relevance to job performance. This requires a thorough job analysis to identify the critical knowledge, skills, abilities, and other characteristics (KSAOs) required for the job, and a clear understanding of how the AI-based assessment will measure those KSAOs. Another key recommendation is collecting a diverse and representative sample of data for the development and validation of the AI-based assessment, including data from different subgroups of job applicants, to ensure that the assessment is fair and does not have an adverse impact on protected groups. Additionally, SIOP recommends ongoing monitoring and updating of the assessment to ensure continued validity, fairness, and effectiveness.
The document highlights the importance of communication with job applicants and other stakeholders about the purpose and use of the AI-based assessment, as well as its limitations and potential biases. This includes providing clear and transparent information about how the assessment works and what it measures and being upfront about potential limitations and biases. Additionally, SIOP recommends that organizations consult with experts in industrial-organizational psychology, data science, labor economics, and legal compliance when developing and implementing AI-based assessments for employee selection.
Artificial Intelligence is changing the world, and many of those changes are exciting. Employers see the potential to make better predictions and assessments in their selection processes and to grow the human capital of their organizations. However, as with all employment decisions, fairness and compliance must be central considerations when adopting these tools. The SIOP recommendations aim to give employers a best-practice guide for using these tools properly.