
Texas Enacts New Law for Employers Using Artificial Intelligence

Written by Brian Marentette, Ph.D. | August 5, 2025

On June 22, 2025, Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA, HB 149) into law. It becomes effective on January 1, 2026, leaving roughly six months to prepare. Below we address four critical questions and offer recommendations to employers seeking to comply with TRAIGA.

What is considered “Artificial Intelligence” in Texas?

TRAIGA defines an artificial intelligence system as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” This definition is broader than those in earlier regulations such as New York City’s Local Law 144 or Colorado’s SB 24-205, which limit their scope to “high-risk” systems that can result in algorithmic discrimination (e.g., in employment decisions).

 

To whom does the law apply and who enforces TRAIGA?

TRAIGA applies to any entity that: a) develops or deploys an AI system in Texas; b) advertises, promotes, or conducts business in the state; or c) offers products or services used by Texas residents. Only the Texas Attorney General can enforce TRAIGA; there is no private right of action. Employees or consumers of AI tools may, however, submit complaints to the Texas Attorney General. TRAIGA is reasonably employer-friendly, giving entities a 60-day period to cure any violation identified by the AG.

 

What AI Practices are prohibited by the law?

As it relates to the employment setting, TRAIGA prohibits developing or deploying an AI system with the intent to discriminate against a protected class under federal or state law. Consistent with what we’ve seen from the Trump Administration and Executive Order 14281, the focus is on intentional discrimination. Disparate impact alone, without intent, does not violate TRAIGA.

 

What Does TRAIGA NOT Require (for Private Employers)?

The law does not require disclosure to job applicants or employees regarding AI use; only state agencies (and healthcare providers, in treatment contexts) must disclose AI usage. And unlike Colorado and New York City, Texas does not mandate AI bias assessments or audits, even for systems that affect hiring, performance, or promotion.

 

Recommendations for Employers  

If your ATS, resume screener, or candidate-matching AI tool could be shown to have been developed or deployed with intent to discriminate against a protected group, you may be at risk even in the absence of any measurable disparate impact.

 

Employers should audit their AI systems, and how those systems are used in decision-making, to confirm the tools do not intentionally discriminate against applicants or employees. Implementing AI policies and training on appropriate AI use can help mitigate the risk of intentional discrimination. Employers should also ask their AI vendors to confirm that their tools do not intentionally discriminate against a protected group. And although the Texas law does not cover disparate impact discrimination, employers should still monitor AI tools for adverse effects on protected groups (a simple check is sketched below) and ensure all tools are job-related.
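Although TRAIGA does not mandate it, a basic adverse-effects check is straightforward to run. Below is a minimal Python sketch using the familiar four-fifths (80%) rule from the EEOC’s Uniform Guidelines; the group labels and selection counts are hypothetical, and a real analysis should involve an I/O Psychologist and appropriate statistical tests.

    # Minimal sketch: adverse-impact check via the four-fifths (80%) rule.
    # Group labels and counts below are hypothetical illustrations only.

    def selection_rate(selected, applicants):
        """Share of applicants in a group who were selected."""
        return selected / applicants if applicants else 0.0

    def impact_ratios(outcomes):
        """Ratio of each group's selection rate to the highest group's rate."""
        rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
        top = max(rates.values())
        return {g: (r / top if top else 0.0) for g, r in rates.items()}

    # Hypothetical counts: (selected, total applicants) per group.
    outcomes = {
        "group_a": (48, 120),  # 40% selection rate
        "group_b": (30, 100),  # 30% selection rate
    }

    for group, ratio in impact_ratios(outcomes).items():
        flag = "below four-fifths threshold" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")

An impact ratio below 0.80 is the conventional flag for adverse impact. Under TRAIGA, such a finding is not itself a violation absent intent, but it is exactly the kind of evidence a governance team should document, investigate, and tie back to job-relatedness.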

Compliance hinges on establishing internal governance: AI oversight teams, documentation of data use and outputs, testing, and steps to mitigate any prohibited practices. We recommend that anyone using AI tools to make employment decisions establish a governance team that includes, among others, an Industrial/Organizational Psychologist who can evaluate the adverse effects, intent, and job-relatedness of AI tools. Reach out to Berkshire’s People Insights team if you would like to learn more about our AI Governance services.