Key Takeaways from the November 2025 Webinar
Artificial Intelligence (AI) is rapidly transforming every stage of the employment cycle, from recruiting and selection assessments to performance management, workplace safety, and employee monitoring. In a recent expert panel webinar, Berkshire brought together a lawyer, a labor economist, and an industrial-organizational psychologist to discuss practical strategies for ensuring defensibility and compliance as employers navigate the evolving regulatory landscape for AI tools in HR.
Below, we summarize the core discussion points and share actionable recommendations for employers considering or currently implementing AI in their employment processes.
Foundational Legal Considerations
Existing Federal Laws Apply: AI tools used in employment decisions are covered by long-standing federal anti-discrimination statutes (Title VII, ADEA, ADA), not just new state/local regulations. The risk of litigation is real, with several cases already moving through the courts.
State & Local Patchwork: States like California, Colorado, New York, Illinois, Maryland, and Texas have enacted or amended laws targeting AI in employment. Requirements vary but commonly involve transparency, regular bias audits, and, in some cases, reporting results to regulators.
Due Diligence Is Critical: Regardless of jurisdiction, employers should conduct thorough due diligence on both the AI tool and its vendor, confirming that the tool works as claimed and that the vendor is a reliable partner.
Best Practices Before Adopting AI Tools
Start with the Problem, Not the Technology: Define your organizational need for AI. Don’t adopt AI just for the sake of it—ensure there’s a clear ROI, such as addressing high turnover or improving efficiency in large applicant pools.
Test and Validate: Pilot the tool with your own data to confirm it actually delivers on its promises (e.g., reducing turnover, improving selection accuracy) before full implementation; a minimal sketch of one such pilot check follows this list.
Vendor Transparency: Ensure you can access the data and methodology necessary for validation and bias assessment. Lack of transparency from a vendor is a red flag.
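To make the pilot step concrete, here is one way to check a tool's promise against your own historical data. Everything in this sketch is hypothetical: the column names, scores, retention outcomes, and the cutoff of 65 are invented, and a real pilot would require a far larger sample.

```python
import pandas as pd
from scipy import stats

# Hypothetical pilot data: historical hires scored retrospectively by the
# tool, with a known 12-month retention outcome. Real pilots need far
# larger samples than this toy set.
pilot = pd.DataFrame({
    "tool_score":   [82, 45, 91, 60, 77, 38, 88, 52, 70, 65],
    "retained_12m": [1, 0, 1, 0, 1, 0, 1, 1, 0, 1],
})

# Point-biserial correlation: does the score track the promised outcome?
r, p = stats.pointbiserialr(pilot["retained_12m"], pilot["tool_score"])
print(f"score-retention correlation r = {r:.2f} (p = {p:.3f})")

# Retention above vs. below a (hypothetical) vendor-recommended cutoff.
cutoff = 65
above = pilot.loc[pilot["tool_score"] >= cutoff, "retained_12m"].mean()
below = pilot.loc[pilot["tool_score"] < cutoff, "retained_12m"].mean()
print(f"retention: {above:.0%} at/above cutoff vs. {below:.0%} below")
```

If the correlation is near zero, or the cutoff does not meaningfully separate outcomes in your population, the tool is not delivering on its promise, whatever it did in the vendor's original sample.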
Bias Audits and Ongoing Monitoring
Annual Bias Audits as a Baseline: Jurisdictions such as New York City and Colorado require annual bias audits. Best practice is to conduct audits not only annually but also after significant changes—such as geographic expansion, job category expansion, outreach initiatives, or algorithm updates.
Whose Responsibility? Regular audits triggered by law often fall to the legal department, but data changes or process expansions need to be flagged by internal teams. For some regulations (e.g., NYC), external independent auditors are required.
AI Governance Framework
Interdisciplinary Teams: Establish a cross-functional governance team that includes expertise from legal, HR, data science, economics, IT, and I/O psychology.
Lifecycle Management: Key steps include vendor vetting, pilot testing, training, implementation, ongoing monitoring, and eventual sunsetting of tools.
Governance Processes: Evaluate vendors thoroughly, secure pilot phases within contracts, ensure transparency in tool operations, and maintain robust recordkeeping.
Managing Third-Party AI Vendors
Be Cautious with “Fairness” Claims: Some vendors “recalibrate” tools to avoid adverse impact, which can risk disparate treatment under federal law if not handled properly.
Validation is Context-Specific: Validation studies must be relevant to your specific use case and employee population. Be skeptical of vague “EEOC certified” or “OFCCP approved” claims—such certifications do not actually exist.
Metrics for Evaluating AI Fairness
Impact Ratios and Selection Rates: Calculating selection rates and impact ratios for different groups is a regulatory requirement in some jurisdictions (e.g., New York City).
Advanced Statistical Techniques: Go beyond basic ratios by conducting requisition- or job-level analyses using Fisher's exact test, chi-square tests, regression, the Cochran-Mantel-Haenszel (CMH) test, and similar methods to understand whether and where bias occurs; a worked sketch of these calculations follows this list.
Qualitative Assessments Matter: Don’t forget to review the qualitative aspects, such as the representativeness of training data and appropriateness of the tool’s use case.
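To make these metrics concrete, the sketch below computes pooled selection rates and an impact ratio in the spirit of the four-fifths rule, runs Fisher's exact test within each requisition, and applies the CMH test across requisitions. All counts and group labels are hypothetical, and this is an illustration, not the specific impact-ratio formula any regulation (such as NYC Local Law 144) prescribes.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import StratifiedTable

# Per-requisition 2x2 tables: rows = [group A, group B],
# columns = [selected, not selected]. All counts are hypothetical.
req_tables = [
    np.array([[12, 88], [6, 94]]),   # requisition 1
    np.array([[20, 80], [10, 90]]),  # requisition 2
]

# Pooled selection rates and impact ratio. The four-fifths rule flags an
# impact ratio below 0.80 for further scrutiny.
pooled = sum(req_tables)
rate_a = pooled[0, 0] / pooled[0].sum()
rate_b = pooled[1, 0] / pooled[1].sum()
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A = {rate_a:.1%}, B = {rate_b:.1%}; "
      f"impact ratio = {impact_ratio:.2f}")

# Requisition-level Fisher's exact tests catch disparities that pooled
# numbers can mask (or manufacture, per Simpson's paradox).
for i, table in enumerate(req_tables, start=1):
    _, p = stats.fisher_exact(table)
    print(f"requisition {i}: Fisher's exact p = {p:.3f}")

# The CMH test pools evidence across requisitions while controlling for
# requisition-level differences in overall selection rates.
cmh = StratifiedTable(req_tables).test_null_odds(correction=True)
print(f"CMH statistic = {cmh.statistic:.2f}, p = {cmh.pvalue:.3f}")
```

Running the tests at the requisition level and then pooling with CMH is what lets you say not just whether a disparity exists overall, but where it arises.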
What If Bias Is Found?
Burden Shifts to Employer: If a tool results in a disparity for a protected group, employers must demonstrate that the tool is job-related and consistent with business necessity, per the Uniform Guidelines on Employee Selection Procedures (UGESP).
Validation Approaches: UGESP recognizes three types: content validation (alignment with job tasks), criterion-related validation (a statistical relationship with job performance), and construct validation (measurement of job-relevant attributes); a minimal criterion-related example follows this list.
Tools Are Validatable: AI tools can be validated much like traditional assessments; ensure your validation approach is tailored to the AI tool's specific function and your organization's context.
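Criterion-related validation lends itself to a simple numeric illustration: the validity coefficient is the correlation between the tool's scores and a job performance criterion. The sketch below uses invented scores and supervisor ratings.

```python
import numpy as np
from scipy import stats

# Hypothetical tool scores and supervisor performance ratings for the
# same incumbents; all values are invented for illustration.
tool_scores = np.array([55, 62, 71, 48, 90, 66, 80, 58, 74, 85])
perf_ratings = np.array([3.1, 3.4, 4.0, 2.8, 4.6, 3.3, 4.2, 3.0, 3.9, 4.4])

# The validity coefficient is the correlation between predictor and criterion.
r, p = stats.pearsonr(tool_scores, perf_ratings)
print(f"validity coefficient r = {r:.2f} (p = {p:.3f})")

# Real studies also require adequate sample sizes, corrections for range
# restriction, and a criterion that reflects actual job requirements.
```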
Final Recommendations
Leverage Established Frameworks: Many traditional risk mitigation and validation methods for pre-employment assessments apply to AI tools.
Set Tools Up for Success: Feed tools accurate, up-to-date, and job-relevant data; vet your training data rigorously.
Monitor Use Creep: Avoid expanding AI tool use beyond its validated scope without reassessment.
Continuous Evaluation: Build processes to re-examine tools as jobs, markets, and regulations evolve.
Conclusion
AI in HR is here to stay, but its responsible use demands careful planning, robust validation, and ongoing monitoring. By following established frameworks and best practices, employers can harness AI’s benefits while managing regulatory and litigation risks.
Need help evaluating or auditing your HR AI tools? Reach out to Berkshire and our partners at Resolution Economics for expert guidance on compliance, validation, and defensibility.