Texas Passes Major AI Statute: Texas Responsible Artificial Intelligence Governance Act

Michelle Ma
July 25, 2025

AI Talk

As I discussed in my last post, the federal moratorium on state-level AI regulations bit the dust, never making it into the Big Beautiful Bill. This leaves AI legislation squarely back in the hands of the states, and Texas recently passed its first major AI statute, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). In today’s post, I discuss the general principles the Act covers and its most noteworthy areas.

Key Takeaways

  1. TRAIGA takes effect January 1, 2026, ahead of the Colorado AI Act (passed in 2024), which takes effect the following month
  2. Broad applicability. TRAIGA covers in-state and out-of-state businesses that sell to Texas residents, as well as developers or deployers of AI systems in Texas who sell outside the state. The law applies to any person who “(1) promotes, advertises, or conducts business in [Texas]; (2) produces a product or service used by residents of this state; or (3) develops or deploys an artificial intelligence system in this state.”
  3. Explicitly protects civil rights by prohibiting government agencies from using AI systems in certain areas
  4. Creates a compliance roadmap
  5. Empowers the Texas attorney general with strong enforcement powers, with no private right of action
  6. Creates new regulatory mechanisms, including an AI sandbox and an AI council

Compliance Roadmap

TRAIGA doesn’t impose governance and accountability measures in the same manner as the EU AI Act. However, it does implicitly provide a roadmap for satisfying its new governance standard through civil investigative demands (CIDs). Under the law, when the state AG’s office receives an individual complaint alleging a TRAIGA violation (complaints can be submitted online), it can choose to issue a CID requesting certain information from developers and deployers:

  • Description of the purpose, intended use, deployment context, and associated benefits of the AI system
  • Description of the categories of data the AI system processes as inputs and the outputs it produces
  • Metrics used to evaluate the performance and known limitations of the AI system
  • Description of the post-deployment monitoring and user safeguards provided (such as the oversight, use, and learning processes established to address issues arising from deployment)
  • Summary of the type of data used to program or train the AI system

All this to say, it would be wise for businesses to maintain robust recordkeeping practices documenting all of these areas, in the event they receive a CID.
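For teams that want to operationalize that recordkeeping, here is a minimal sketch in Python showing one way to structure an internal record around the five CID categories above. The `AISystemRecord` class, its field names, and the example values are hypothetical illustrations, not terms drawn from the statute.

```python
# Hypothetical recordkeeping sketch: one structured record per AI system,
# mirroring the categories of information a CID may request under TRAIGA.
# Class and field names are illustrative, not drawn from the statute.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AISystemRecord:
    system_name: str
    # Purpose, intended use, deployment context, and associated benefits
    purpose_and_intended_use: str
    deployment_context: str
    associated_benefits: list[str] = field(default_factory=list)
    # Categories of data processed as inputs and outputs produced
    input_data_categories: list[str] = field(default_factory=list)
    output_types: list[str] = field(default_factory=list)
    # Performance metrics and known limitations
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    # Post-deployment monitoring and user safeguards
    monitoring_and_safeguards: list[str] = field(default_factory=list)
    # Summary of the data used to program or train the system
    training_data_summary: str = ""

    def to_json(self) -> str:
        """Serialize the record so it can be archived or produced on request."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    record = AISystemRecord(
        system_name="resume-screening-model-v2",
        purpose_and_intended_use="Rank job applications for recruiter review",
        deployment_context="Internal HR tooling for Texas-based hiring",
        associated_benefits=["Faster initial screening"],
        input_data_categories=["resume text", "job description"],
        output_types=["relevance score", "ranked shortlist"],
        evaluation_metrics={"precision_at_10": 0.82},
        known_limitations=["Not validated for non-English resumes"],
        monitoring_and_safeguards=["Human review of all rejections", "Quarterly bias audit"],
        training_data_summary="Licensed resume corpus; no biometric identifiers",
    )
    print(record.to_json())
```

Keeping a record like this per system, updated as the system changes, means a CID response becomes largely a matter of exporting existing documentation rather than reconstructing it after the fact.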

Civil Rights Protection

Notably, one of TRAIGA’s main aims is to protect individual civil rights from infringement by government entities using AI systems. The prohibited areas include:

  • Manipulation of human behavior to incite or encourage harm (such as physical self-harm or harming another person) or criminal activity.
  • Manipulation of human behavior to circumvent informed decision-making, which includes deceptive trade practices.
  • Social scoring by a government entity to evaluate or classify individuals based on social behavior or personality characteristics, in certain contexts.
  • Government entities developing or deploying an AI system using biometric identifiers, except with individual consent.
  • Political viewpoint discrimination or infringement on freedom of association or free speech, to the extent the viewpoints are not illegal or obscene.
  • Use of AI in pornographic or sexual contexts, including the sexual exploitation of children.

Most importantly, TRAIGA prohibits developing or deploying an AI system with the intent to unlawfully discriminate against a protected class, with certain exemptions for financial institutions and insurance companies.

AI Sandbox Established

To encourage innovation, TRAIGA establishes a sandbox that gives participants legal protection and limited access to the Texas market to test AI systems without having to obtain a license, registration, or other regulatory authorization. Businesses can apply, and once enrolled, they will not face enforcement actions based on their activities in the program.

AI Council Created

The new law also establishes a council of experts in law, ethics, and technology, appointed by the governor, lieutenant governor, and speaker of the Texas House. The council’s mandate is to ensure that AI systems are ethical, developed in the public’s best interest, and do not harm public safety or undermine individual freedom. Importantly, the council is also tasked with evaluating potential instances of “regulatory capture,” which include undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of AI systems.