State of California to regulate use of AI in employment

More states are enacting laws to regulate the use of Artificial Intelligence (AI) in Employment. Berkshire recently reported on a new law taking effect in Texas, and now California is setting regulations in place for employers using AI in employment decisions.

Effective October 1, 2025, significant revisions to Title 2 of the California Code of Regulations go into effect. The regulations implement the state’s Fair Employment and Housing Act (FEHA) and its prohibitions against discrimination in recruitment, hiring, promotion, training, and terminations. The revisions clarify how the state’s existing nondiscrimination requirements apply to AI-driven automated decision-making systems (ADS).

Some key definitions included in the revisions:

  • Automated-Decision System - A computational process that makes a decision or facilitates human decision making regarding an employment benefit. An automated-decision system may be derived from and/or use artificial intelligence, machine learning, algorithms, statistics, and/or other data processing techniques.
  • Algorithm - A set of rules or instructions a computer follows to perform calculations or other problem-solving operations.
  • Artificial Intelligence - A machine-based system that infers, from the input it receives, how to generate outputs. Outputs can include predictions, content, recommendations, or decisions.
  • Automated-Decision System Data - Any data used in or resulting from the application of an automated-decision system, or any data used to develop or customize an automated-decision system.
  • Machine Learning - The ability of a computer to use and learn from its own analysis of data or experience and to apply that learning automatically in future calculations or tasks.

California’s broad definition of ADS means that many aspects of the employment selection process are covered. For example, tools used for resume screening, performance evaluations, and productivity monitoring all fall within its scope. If automated decision making is used in your organization’s employment selection process, you should review that use to ensure it does not discriminate against a protected class of applicants or employees.

Notably, Title 2 applies to employers that have operations in California and regularly employ just five or more employees. Moreover, out-of-state employees count toward that threshold. Accordingly, most employers with any operations in the state are likely covered. Their out-of-state employees are protected, however, only if the alleged conduct occurred in the state or the decisions were made by decision makers in the state.

Recordkeeping

Employers covered by California’s Title 2 are required to monitor their employment selection processes, including AI tools, for potential discrimination. That means creating and preserving a host of data and information.

With regard to ADS features, that includes preserving dataset descriptors, scoring outputs, and audit findings. That last item is key. The regulations strongly encourage California employers to conduct “bias audits” of their ADS features. In discrimination cases, the quality, scope, recency, results, and employers’ responses to bias tests can be taken into consideration by agencies and courts. More to the point, the absence of such evidence can weigh against employers, so while not explicitly required, robust monitoring is a practical requirement for California employers.

The recordkeeping requirements extend to soliciting sex and race from both applicants and employees. However, that information cannot be stored along with the rest of the regular “personnel file,” and must be kept away from decision-makers (including automated tools).

Notably, the revisions to Title 2 expand the recordkeeping period from two years to four.

Also note that California’s Title 2 expressly covers both disparate treatment and disparate impact discrimination. So, even if the federal government deprioritizes disparate impact theory in federal cases, it remains a viable theory of legal liability, and states like California are still enforcing it.

What should employers do now?

State and local jurisdictions around the country are filling the void at the federal level with their own controls on AI and automated tools in employment. While the particulars will vary, they all involve some level of monitoring and human oversight. More specifically, they require employers to know more about how their automated systems work and are generally aimed at pulling employers—the users of AI and related tools—into the sphere of liability if and when the tools they use (most often tools provided by third parties) cause harm.

Accordingly, employers need to educate themselves regarding the tools they use, how they actually work, and what their vendors do, and don’t do, to monitor potential discriminatory effects. To comply with laws such as California’s Title 2, employers will need to either demand more of their vendors, or implement robust monitoring programs of their own, or a combination of the two.

Berkshire Can Help

As AI and related tools become more prevalent and harder to avoid, more and more employers will find themselves at risk. Berkshire’s People Insights team can help you determine what you need to do to protect against potential liability, work with you to appropriately vet third-party AI vendors, and help you design and implement your own monitoring protocols, including establishing an AI governance team. Feel free to reach out to us at bai@berkshireassociates.com to speak to a consultant about your AI policies.

Kristen N. Johnson, MS, MBA, SHRM-SCP and Matt Nusbaum
Kristen is a Managing HR Consultant with Berkshire, assisting clients of all sizes with compliant development of Affirmative Action Plans and other required government reporting. She also specializes in assisting clients with State reporting and other municipality certifications. Matt has more than nine years of experience as a practicing attorney counseling and representing employers on matters before the OFCCP and other federal, state, and local workplace regulatory and enforcement agencies.