AI Talk
As I discussed in my last post, the federal moratorium on state-level AI regulations bit the dust, never making it into the Big Beautiful Bill. This leaves AI legislation squarely back in the hands of the states, and Texas recently passed its first major AI statute, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). In today’s post, I discuss the general principles the Act covers and highlight some noteworthy areas.
TRAIGA doesn’t impose governance and accountability measures in the same manner as the EU AI Act. However, it does implicitly provide a roadmap for satisfying its new governance standard through civil investigative demands (CIDs). The law states that when the Texas Attorney General’s office receives an individual complaint regarding a TRAIGA violation (complaints can be made online), it can choose to issue a CID requesting certain information from developers and deployers:
All this to say: it would be wise for businesses to maintain robust recordkeeping practices documenting all of these areas in the event they receive a CID.
Notably, one of TRAIGA’s main aims is to protect individual civil rights from infringement by government entities using AI systems in these prohibited areas:
And most importantly, TRAIGA prohibits developing or deploying AI systems with the intent to unlawfully discriminate against a protected class, with certain exemptions for financial institutions and insurance companies.
To encourage innovation, TRAIGA establishes a sandbox that allows participants to obtain legal protection and limited access to the Texas market to test AI systems without needing a license, registration, or other regulatory authorization. Businesses can apply, and once enrolled, they will not face enforcement actions based on their activities in the program.
The new law also establishes a council of experts in law, ethics, and technology, appointed by the governor, lieutenant governor, and speaker of the Texas House. The council’s mandate is to ensure that AI systems are ethical, developed in the public’s best interest, and do not harm public safety or undermine individual freedom. Importantly, the council is also tasked with evaluating potential instances of “regulatory capture,” including undue influence by tech companies and disproportionate burdens placed on smaller innovators by the use of AI systems.