White Paper
Navigating TRAIGA: The Business Leader's Guide to Texas AI Compliance & the 90th Legislative Session
A strategic framework for understanding the Texas Responsible AI Governance Act, the DIR Regulatory Sandbox, and what's coming in 2027.
By James Dickey | Published February 2026 | Updated February 2026
Executive Summary
2026 is the year Texas's formal AI governance framework takes effect. The Texas Responsible AI Governance Act (TRAIGA), introduced as HB 149 and SB 1964 during the 89th Legislature, creates new compliance obligations for companies developing or deploying AI systems in Texas. With the 90th Legislature convening in 2027, the pace of regulation is set to accelerate.
This guide provides business leaders with a practical framework for understanding TRAIGA's requirements, evaluating DIR Sandbox participation, preparing for AG enforcement, and positioning ahead of the 90th Legislature's expanded AI agenda.
Breaking Down TRAIGA (HB 149/SB 1964)
What is TRAIGA?
The Texas Responsible AI Governance Act establishes Texas's first comprehensive AI regulatory framework. Introduced during the 89th Legislature as HB 149 (House) and SB 1964 (Senate), TRAIGA creates a classification system for AI systems, defines compliance obligations for different participants in the AI ecosystem, and establishes enforcement mechanisms through the Attorney General's office.
Who does TRAIGA apply to?
TRAIGA distinguishes between two categories of regulated entities, each with different obligations:
AI Developers
Companies that create or substantially modify AI systems
- Provide transparency documentation
- Disclose training data practices
- Document known limitations
- Maintain technical documentation
AI Deployers
Companies using AI in consequential decision-making
- Conduct impact assessments for high-risk uses
- Maintain human oversight mechanisms
- Provide consumer disclosure when AI is used
- Report adverse impacts
What is the intent standard?
TRAIGA applies an “intent standard” that focuses on whether an AI system is designed for or reasonably foreseen to make consequential decisions affecting individuals. This means companies cannot avoid compliance simply by disclaiming that their AI tools are “advisory only” if those tools are predictably used for consequential determinations in areas like employment, lending, insurance, or housing.
What are the disclosure requirements?
Both developers and deployers face disclosure obligations, though the requirements differ. Developers must provide documentation about system capabilities, limitations, and appropriate use cases. Deployers must inform individuals when AI systems are used in consequential decisions and provide mechanisms for human review of automated determinations. These disclosure requirements apply regardless of whether the AI system qualifies as “high-risk.”
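In practice, many deployers operationalize these obligations by logging each AI-assisted consequential decision together with the disclosure provided and the human-review option offered. TRAIGA does not prescribe any particular format for such records; the sketch below is a minimal, hypothetical illustration of the kinds of fields a deployer's internal log might capture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical internal record for an AI-assisted consequential decision.
# TRAIGA does not mandate this structure; it simply illustrates the kind of
# information a deployer could retain to support disclosure and human review.
@dataclass
class AIDecisionRecord:
    system_name: str              # the AI system involved (per developer docs)
    decision_context: str         # e.g. "employment screening", "lending"
    consumer_disclosed: bool      # was the individual told AI was used?
    disclosure_method: str        # e.g. "written notice", "in-app banner"
    human_review_available: bool  # is a human-review mechanism offered?
    reviewer: str | None = None   # who performed human review, if requested
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: logging a lending decision that used an AI scoring model.
record = AIDecisionRecord(
    system_name="credit-scoring-model-v3",
    decision_context="lending",
    consumer_disclosed=True,
    disclosure_method="written notice",
    human_review_available=True,
)
print(record)
```

A record like this also doubles as the audit trail discussed later in connection with the 60-day cure period.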
The Texas AI Regulatory Sandbox
One of TRAIGA's most significant provisions is the creation of an AI Regulatory Sandbox administered by the Texas Department of Information Resources (DIR). This program provides qualified companies with a structured path to innovate while demonstrating responsible AI deployment.
Sandbox Key Features
- 36-month safe harbor from certain TRAIGA enforcement provisions during active participation
- Structured application process through DIR with defined eligibility criteria and review timelines
- Quarterly reporting on testing activities, consumer impacts, and compliance progress
- Graduated exit pathway into full compliance, reducing transition risk for innovative companies
The sandbox is particularly relevant for companies developing novel AI applications in healthcare, financial services, education, or government services. Participation provides both regulatory breathing room and a demonstrable record of responsible development practices.
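For planning purposes, the quarterly reporting cadence over the 36-month window is simple to project. The sketch below assumes, purely for illustration, that a report falls due every three months from the enrollment date; actual deadlines and formats are set by DIR, not by this calculation.

```python
from datetime import date

def quarterly_report_dates(enrollment: date, months: int = 36) -> list[date]:
    """Project quarterly report dates across a sandbox participation window.

    Assumes (hypothetically) that a report is due every 3 months from the
    enrollment date; DIR's actual schedule would control in practice.
    """
    dates = []
    for quarter in range(1, months // 3 + 1):
        total_months = enrollment.month - 1 + 3 * quarter
        year = enrollment.year + total_months // 12
        month = total_months % 12 + 1
        day = min(enrollment.day, 28)  # avoid invalid end-of-month dates
        dates.append(date(year, month, day))
    return dates

# Example: enrolling on March 1, 2026 implies 12 quarterly reports,
# the last landing in March 2029 at the end of the 36-month window.
for due in quarterly_report_dates(date(2026, 3, 1)):
    print(due.isoformat())
```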
Strategic consideration: Companies that participate in the sandbox build compliance credibility that may prove valuable as TRAIGA's enforcement mechanisms mature. Early participants will have working relationships with DIR and a documented compliance track record before the broader enforcement regime takes full effect.
Enforcement & the 60-Day Cure Period
TRAIGA's enforcement framework balances accountability with pragmatism. The Texas Attorney General holds primary enforcement authority, but the statute incorporates several mechanisms designed to encourage voluntary compliance before punitive action.
How does the 60-day cure period work?
Upon identification of a violation, the AG must provide written notice specifying the nature of the violation and the required remediation. The company then has 60 days to cure the violation before enforcement proceedings may commence. This cure period applies to first violations and creates a meaningful incentive for companies to build remediation capabilities into their compliance programs.
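The cure clock itself is simple date arithmetic, but it is worth tracking explicitly so that remediation milestones land well inside the window. The sketch below assumes the 60 days are calendar days counted from the date of the AG's written notice, a reading counsel should confirm before relying on it.

```python
from datetime import date, timedelta

CURE_PERIOD_DAYS = 60  # TRAIGA's cure window following written notice

def cure_deadline(notice_date: date) -> date:
    """Date by which remediation must be complete (assumes calendar days)."""
    return notice_date + timedelta(days=CURE_PERIOD_DAYS)

def days_remaining(notice_date: date, today: date) -> int:
    """Days left in the cure window as of `today` (negative once expired)."""
    return (cure_deadline(notice_date) - today).days

# Example: written notice received February 10, 2026.
notice = date(2026, 2, 10)
print(cure_deadline(notice))                     # 2026-04-11
print(days_remaining(notice, date(2026, 3, 1)))  # 41 days left
```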
Enforcement Considerations
| Factor | Impact on Enforcement |
|---|---|
| NIST Framework Alignment | Favorable consideration in enforcement discretion |
| Sandbox Participation | Safe harbor during active enrollment |
| High-Risk Classification | Enhanced obligations and scrutiny |
| Business Size | Fine scaling reflects company resources |
| Cure Period Compliance | Successful cure within 60 days resolves first violations |
Cure strategy: The most effective approach is building a cure-capable compliance infrastructure before any enforcement action occurs. This means maintaining documentation, audit trails, and remediation playbooks that can demonstrate good-faith compliance within the 60-day window.
90th Legislature Outlook (2027)
Current Outlook — February 2026. This section reflects the anticipated legislative landscape and will be updated as the 90th session approaches.
The 90th Texas Legislature (convening January 2027) is expected to expand on the AI governance framework established by TRAIGA. Based on interim committee activity, stakeholder engagement, and national policy trends, several areas are likely to receive legislative attention:
Grid Resilience & AI Data Centers
Expect expanded ERCOT requirements for AI-driven data centers, including enhanced demand response obligations, water usage reporting standards, and potential power purchase agreement transparency requirements. The intersection of grid reliability and AI compute demand is a top priority for both the Senate and House.
Public Sector AI Standards
State agency use of AI is likely to receive formal procurement standards and transparency requirements. DIR's role may expand from sandbox administration to broader AI oversight for government applications, particularly in criminal justice, benefits administration, and licensing decisions.
Deepfake Fraud Prevention
The 90th Legislature is expected to address AI-generated content in elections, financial fraud, and identity theft. Bills targeting synthetic media disclosure, deepfake-enabled fraud, and AI-generated impersonation are anticipated from both chambers.
TRAIGA Amendments
Based on early enforcement experience and stakeholder feedback, amendments to TRAIGA's classification system, cure period provisions, or penalty structures are possible. Companies actively engaged in compliance will have standing to influence these amendments constructively.
From Defense to Offense: The JD Key Approach
Most firms approach AI regulation as a compliance burden to minimize. JD Key Consulting helps clients transform TRAIGA obligations into strategic positioning that creates competitive advantage.
NIST Alignment
Aligning operations with the NIST AI Risk Management Framework provides both compliance defense and market credibility. Companies that can demonstrate NIST alignment may receive favorable consideration in TRAIGA enforcement discretion.
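One practical way to evidence that alignment is to map the AI RMF's four core functions to the internal artifacts that satisfy them. The function names below come from NIST AI RMF 1.0; the artifacts and the gap-check helper are hypothetical examples, not a prescribed mapping.

```python
# Hypothetical mapping of NIST AI RMF 1.0 core functions to internal
# evidence artifacts. The function names are from the framework; the
# artifacts listed here are illustrative only.
NIST_AI_RMF_EVIDENCE = {
    "GOVERN":  ["AI governance policy", "roles and responsibilities matrix"],
    "MAP":     ["AI system inventory", "use-case and context assessments"],
    "MEASURE": ["bias and performance test reports", "monitoring dashboards"],
    "MANAGE":  ["risk treatment plans", "incident and remediation playbooks"],
}

def coverage_gaps(evidence_on_file: dict[str, list[str]]) -> list[str]:
    """Return RMF core functions that have no supporting artifact yet."""
    return [fn for fn in NIST_AI_RMF_EVIDENCE if not evidence_on_file.get(fn)]

# Example: governance and mapping documents exist, measurement and
# management evidence do not.
print(coverage_gaps({"GOVERN": ["AI governance policy"], "MAP": ["inventory"]}))
# -> ['MEASURE', 'MANAGE']
```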
Transparency Audits
Proactive transparency reporting establishes a compliance track record before enforcement actions occur. This creates both a legal defense and a marketing differentiator in AI-conscious markets.
Sandbox Evaluation
The DIR Sandbox program offers a unique opportunity to innovate with regulatory cover. We help clients evaluate whether sandbox participation aligns with their product roadmap and competitive strategy.
James Dickey brings a unique perspective to AI governance strategy: deep political infrastructure experience combined with regulated industry expertise. The same relationships and strategic thinking that registered 223,000 voters and recruited 3,862 candidates now help technology companies navigate the intersection of innovation and regulation.
Frequently Asked Questions
What is TRAIGA and the Texas AI Mandate of 2026?
TRAIGA (Texas Responsible AI Governance Act) refers to legislation introduced during the 89th Texas Legislature as HB 149 and SB 1964. It establishes compliance requirements for businesses developing or deploying AI systems in Texas, including disclosure obligations, risk assessments for high-risk systems, and a regulatory sandbox administered by the Department of Information Resources (DIR).
Who enforces AI compliance in Texas under TRAIGA?
The Texas Attorney General has primary enforcement authority under TRAIGA. Businesses receive a 60-day cure period after notice of a violation before enforcement actions proceed. The Department of Information Resources (DIR) administers the AI Regulatory Sandbox program, and the Texas AI Council advises on policy implementation and standards.
What is the difference between an AI developer and deployer under TRAIGA?
Under TRAIGA, a “developer” creates or substantially modifies an AI system, while a “deployer” uses an AI system in a consequential decision-making context. Each role carries different compliance obligations. Developers must provide transparency documentation, while deployers must conduct impact assessments for high-risk applications and maintain human oversight mechanisms.
What is the Texas DIR Regulatory Sandbox for AI?
The DIR Regulatory Sandbox provides a 36-month safe harbor for qualifying AI companies to test innovative products in a controlled environment with reduced regulatory burden. Participants must submit quarterly reports on testing activities, consumer impacts, and compliance progress. The sandbox program allows companies to demonstrate responsible AI deployment before facing full regulatory requirements.
What AI penalties and fines does TRAIGA establish?
TRAIGA's enforcement framework includes civil penalties administered by the Texas Attorney General. The statute provides a 60-day cure period following notice of violation, giving businesses the opportunity to remediate issues before penalties apply. Fine structures scale based on violation severity, business size, and whether the violation involved a high-risk AI system. Companies demonstrating NIST AI Risk Management Framework alignment may receive favorable consideration.
How should businesses prepare for the 90th Texas Legislature on AI policy?
The 90th Texas Legislature (2027) is expected to consider expanded AI governance including grid resilience requirements for AI data centers, public sector AI procurement standards, deepfake fraud prevention measures, and potential amendments to TRAIGA's enforcement mechanisms. Businesses should proactively align operations with NIST AI Risk Management Framework standards, evaluate DIR Sandbox participation, and engage with the Texas AI Council on rulemaking.
Does JD Key Consulting help with TRAIGA compliance?
Yes. JD Key Consulting provides strategic advisory on TRAIGA compliance, including developer and deployer classification, disclosure requirements, DIR Regulatory Sandbox applications, and AG enforcement preparation. James Dickey helps clients transform compliance obligations into competitive positioning through proactive strategy aligned with NIST standards.
Navigate TRAIGA with Confidence
Schedule a confidential consultation with James Dickey to discuss your AI compliance strategy.