This bill is massive, so I will not cover all its provisions comprehensively. Here, however, is a summary of what the new version of TRAIGA does.
TRAIGA in Brief
The ostensible purpose of TRAIGA is to combat algorithmic discrimination, or the notion that an AI system might discriminate, intentionally or unintentionally, against a consumer based on their race, color, national origin, gender, sex, sexual orientation, pregnancy status, age, disability status, genetic information, citizenship status, veteran status, military service record, and, if you reside in Austin (which has its own protected classes), marital status, source of income, and student status. It also seeks to ensure the “ethical” deployment of AI by creating an exceptionally powerful AI regulator, and by banning certain use cases, such as social scoring, subliminal manipulation by AI, and a few others.
Like SB 1047, TRAIGA accomplishes its goal by imposing “reasonable care” negligence liability. But TRAIGA goes much further. First, its liability is far broader than SB 1047’s. SB 1047 created an obligation for developers of AI models that cost over $100 million to exercise “reasonable care” (a common legal term of art) to avoid harms greater than $500 million. TRAIGA requires developers (both foundation model developers and fine-tuners), distributors (mainly cloud service providers), and deployers (corporate users who are not small businesses) of any AI model, regardless of size or cost, to exercise “reasonable care” to avoid “algorithmic discrimination” against any of the protected classes listed above. Under long-standing legal precedent, discrimination can be deemed to have occurred regardless of discriminatory intent: even if you provably did not intend to discriminate, you can still be found to have discriminated so long as there is a negative effect of some kind on any of the above-listed groups. And you can bear liability for those harms.
On top of this, TRAIGA requires developers and deployers to write a variety of lengthy compliance documents—“High-Risk Reports” for developers, “Risk Identification and Management Policies” for developers and deployers, and “Impact Assessments” for deployers. These requirements apply to any AI system that is used, or could conceivably be used, as a “substantial factor” in making a “consequential decision” (I’ll define these terms in a moment, because their definitions have changed since the original version). The Impact Assessments must be performed for every discrete use case, whereas the High-Risk Reports and Risk Identification and Management Policies apply at the model and firm levels, respectively—meaning that they can cover multiple use cases. However, all of these documents must be updated regularly, including when a “substantial modification” is made to a model. In the case of a frontier language model, such modifications happen almost monthly, so both developers and deployers who use such systems can expect to be writing and updating these compliance documents constantly.
In theory, TRAIGA contains an exemption for open-source AI, but it is weak, bordering on nonsensical: the exemption only applies to open models that are not used as “substantial factors” in “consequential decisions,” but it is not clear how a developer of an open-source language model could possibly prevent their model from being used in “consequential decisions,” given the very nature of open-source software. Furthermore, the bill defines open-source AI differently in different provisions, at one point allowing only models that openly release training data, code, and model weights, and at another point allowing models that release weights and “technical architecture.” If you are an open-source developer, the odds are that every provision, including the liability, applies to you.
TRAIGA also creates the most powerful AI regulator in America, and therefore among the most powerful in the world: the Texas Artificial Intelligence Council, a new body with the ability to issue binding rules regarding “standards for ethical artificial intelligence development and deployment,” among a great many other things. This is far more powerful than the regulator envisioned by SB 1047, which had only narrow rulemaking authority.
The bill comes out of a multistate policymaker working group convened by the Future of Privacy Forum, a progressive non-profit focused on importing EU-style technology law into the United States. States like California, Connecticut, Colorado, and Virginia have introduced similar regulations; in important ways, they resemble the European Union’s AI Act, with that law’s focus on preemptive regulation of the use of technology by businesses.
All of this is touted by its sponsor, Representative Giovanni Capriglione, a Republican, as a model for “red state” AI legislation, arriving just months after Donald Trump ran a successful presidential campaign based in part on the idea of broad-based deregulation of the economy. Color me skeptical that Representative Capriglione’s bill matches the current mood of the Republican Party; indeed, I would be hard-pressed to come up with legislation that conflicts more comprehensively with the priorities of the Republican Party as articulated by its leaders. Perhaps you view this as a virtue, perhaps you view it as a sin; I view it as a fact.
All of this has been the thrust of TRAIGA since the beginning. But how has the bill changed since it was previewed in October?
[ed. Wow. Texas (!) takes a mighty swing at AI regulation. Will it be a homer or a foul ball? Definitely interesting to see what kind of pushback this bill gets, and what that means for future efforts. Already we can see it being positioned as a "red state" policy position (as if AI were a political football - or maybe it's just a sales pitch), and the usual scare tactics around stifling "future innovation". Regardless, a lot of thought went into this, which in itself is encouraging, and a good template for what comes next. Also, a side note - a ballpark estimate of the economic stakes involved (PwC):]
***
- What comes through strongly from all the analysis we’ve carried out for this report is just how big a game changer AI is likely to be, and how much value potential is up for grabs. AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion is likely to come from consumption-side effects.
- Labour productivity improvements will drive initial GDP gains as firms seek to "augment" the productivity of their labour force with AI technologies and to automate some tasks and roles.
- Our research also shows that 45% of total economic gains by 2030 will come from product enhancements, stimulating consumer demand. This is because AI will drive greater product variety, with increased personalisation, attractiveness and affordability over time.
- The greatest economic gains from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost), equivalent to a total of $10.7 trillion and accounting for almost 70% of the global economic impact. ~ Sizing the Prize (PwC)