Legislation regulating the burgeoning artificial intelligence industry (not to mention anybody who deploys an AI system for use in the world) has advanced from the Senate Business & Commerce Committee and will presumably go to the Senate floor in short order. The Senate committee made only a few changes. Most significantly, insurers and federally insured (and regulated) financial institutions got exempted, since they already have to comply with a panoply of antidiscrimination laws and various statutes proscribing unlawful, unfair, or deceptive practices.
We have commented at length on this bill since the original draft circulated right before the session began, so we will not go over that ground again. But it is worth reiterating a couple of points we made at the outset but that were generally ignored as the bill made its way through the process.
First, the bill subjects all businesses and health care providers who use AI to the new duties and penalties established by the bill (except, as we pointed out above, insurance companies and banks). This is true even for heavily regulated industries, such as energy, utilities, and manufacturing, not to mention every employer in the state. All of these are already subject to myriad statutory and regulatory frameworks that govern their business practices and punish discriminatory or deceptive conduct. HB 149’s overlay could expose businesses and health care providers to significant additional liability, intrusive state investigations, and litigation brought by the state, a party with bottomless resources and the right to attorney’s fees and costs. We have no idea how the insurance market will assess this additional risk or what kind of insurance premiums might be necessary to cover it. The per-violation penalty levels are plenty large enough to stack up, and while we are pleased that the bill did not end up creating a private right of action on top of the penalties, this exposure will have to be factored in, as will a substantial amount of compliance costs.
Second, we think the bill may overexpose Texas businesses for the simple reason that, although the bill purports to apply to any developer or deployer anywhere, if the AI finds its way into Texas, an unknown number of developers (and perhaps deployers as well) will not be amenable to suit in Texas. Texas courts can only exert jurisdiction to the limits of the federal constitution, and we think it likely that many of the AI systems that are being and will be used in Texas were placed in the stream of commerce with no specific intention that they be used in any particular place. This will be especially true of developers located outside the US who maintain few if any meaningful contacts with Texas. Moreover, if a Texas business or health care provider deploys one of these systems and ends up staring down the barrel of an OAG enforcement action, they will be left holding the bag with little chance of recovering anything from a developer who can’t be reached by the long arm of Texas law. And, since we can be darn sure that AI developers are really smart, we can expect them to design their operations to avoid incurring liability under this bill.
One thing that has been substantially improved since the inception of the bill, however, is the much higher level of specificity regarding the new duties the bill creates. The most general duty remaining in the bill pertains to unlawful discrimination against a protected class in violation of state and federal law. This language marks a big improvement over the far vaguer provision in the original draft. The problem we see here is the definition of “protected class.” The bill defines the term as “a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.” We are not precisely sure what “characteristic, quality, belief, or status” means apart from “race, color, national origin, sex, age, religion, or disability.” We are also not sure exactly what the scope of “state civil rights laws” might be, since there is now so much social regulation aimed at businesses and health care providers (and more being piled on every session) that anything the OAG doesn’t like might “violate” somebody’s “civil rights” somehow. In other words, this bill could be used to provide excellent cover for a state action against an entity or individual because that entity or individual doesn’t agree with the powers momentarily in charge.
This concern may be exaggerated, but consider one example. An employer uses an AI system to screen job applicants. Somebody submits a job application listing membership in, say, a group or association whose beliefs, values, or activities directly contradict those of the prospective employer. Or perhaps the employer simply doesn’t want to run the risk of that person disrupting the workplace by mouthing off or proselytizing the other employees. Under this bill, the use of that system to screen out that particular applicant has just violated his or her “civil” rights, i.e., that applicant’s First Amendment rights of speech or assembly. You get the idea. There are endless permutations of this example that, when discovered by a politically ambitious public official with an axe to grind or the next primary to win, could put the employer in a serious vat of boiling oil. The publicity alone would be ruinous. How can an employer possibly insure against that risk? The bill essentially places what constitutes a violation in the eye of the beholder—and that beholder is a statewide-elected official with all the powers necessary to make the employer’s life miserable. And all because the employer simply wanted a workplace to function with a minimum amount of friction and a maximum amount of productivity.
We can hope that none of that ever happens. But that may be a hope against hope. Every time we have handed the OAG a regulatory function backed by punitive enforcement authority, that authority has been used, followed by a flurry of press releases trumpeting that fact and taking credit for ridding the world of an evil spirit. It doesn’t matter who occupies the office. The end result is always the same. That’s why we have always tried to steer these conversations in the direction of a standard regulatory process with collaborative rulemaking to establish standards clear enough to enable businesses to comply with them. And if they don’t, we already have a well-established administrative process for determining violations and remedying them in a reasonable way. None of that is true under this bill. There will be no rules, just one-off lawsuits aimed at picking off the villain of the week. There will be no development of the law, which is vitally necessary to the stability and predictability of the civil justice system. We will also continue offloading the legislative and executive branches’ responsibility for regulatory matters onto the courts, where they simply do not belong.
Be that as it may, HB 149 will pass, so here is the latest snapshot of what’s in it:
- Amends § 503.001, Business & Commerce Code (Capture of Biometric Identifiers) to add a definition of “artificial intelligence.”
- Adds § 503.001(b-1) to provide that an individual has not been informed or provided consent for the capture or storage of a biometric identifier for a commercial purpose based solely on the existence of an image or other media on the internet or other publicly available source (unless the individual already made it publicly available).
- Amends § 503.001(e) to exempt the training, processing, or storage of biometric identifiers involved in AI systems, unless any of the above is performed for the purpose of uniquely identifying an individual.
- Further exempts the development or deployment of an AI model or system for the purpose of (1) preventing, protecting against, detecting, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or other illegal activity, (2) preserving the integrity or security of a system, or (3) investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, malicious or deceptive activities, or other illegal activity.
- Adds § 503.301(f) to provide that if a biometric identifier used to train an AI system is subsequently used for a commercial purpose, the possessor is subject to regulation and penalties under Chapter 503.
- Amends § 541.104(a)(2), Business & Commerce Code (consumer data protection), to require a possessor of a biometric identifier to assist the controller in complying with security requirements for data collected, stored, and processed by an AI system.
- Adds Subtitle D, Title 11, Business & Commerce Code.
- Adds Chapter 551 and applies it to a person who: (1) promotes, advertises, or conducts business in Texas; (2) produces a product or service used by Texas residents; or (3) develops or deploys an AI system in Texas. (Texas courts may not have personal jurisdiction over out-of-state individuals or entities with insufficient contacts with the state. This could render the bill at least partially unenforceable.)
- Adds § 551.001 to define “artificial intelligence system,” “consumer,” and “Council” (the Texas Artificial Intelligence Council established by Chapter 554).
- Adds Chapter 552 to regulate AI developers and deployers.
- Adds § 552.001 to define “developer” and “deployer.”
- Adds § 552.002 to provide that the statute may not be construed to impose a requirement that adversely affects a person’s rights or freedoms or to authorize any department or agency other than TDI to regulate the business of insurance. (This provision may violate constitutional separation of powers.)
- Adds § 552.003 to preempt local regulation.
- Adds § 552.051 to require a governmental agency that makes available an AI system intended to interact with the public to disclose that to a consumer and prescribes requirements for the disclosure.
- Adds § 552.051(f) to require the provider of a health care service that uses an AI system to disclose it to the recipient of the service not later than the date the service was first provided (except in an emergency, where disclosure must be provided as soon as reasonable).
- Adds § 552.052 to prohibit development or deployment of an AI system to incite a person to commit self-harm, harm another, or engage in criminal activity.
- Prohibits using an AI system to intentionally engage in deceptive practices under the DTPA.
- Adds § 552.053 to prohibit a government entity from using an AI system to assign social scores.
- Adds § 552.054 to prohibit a government entity from developing or deploying AI systems with biometric identifiers and targeted or untargeted gathering of images or media for the purpose of uniquely identifying a specific individual.
- Adds § 552.055 to prohibit any person from developing or deploying an AI system that intentionally limits political viewpoint expression. Prohibits an interactive computer service from using an AI system to discriminate against a user based on political speech (except for illegal hate speech, obscenity, unlawful deep fake images, or speech in violation of intellectual property rights).
- Adds § 552.056 to prohibit development or deployment of an AI system that unlawfully discriminates against a protected class (does not apply to an insurance entity already subject to antidiscrimination laws or prohibitions on unfair or deceptive practices; provides that a federally insured financial institution is considered in compliance if it complies with all federal and state banking laws and regulations).
- Adds § 552.057 to prohibit development or deployment of an AI system with the sole intent of producing or distributing unlawful visual material.
- Adds § 552.101 to give the attorney general exclusive enforcement authority and to bar a private right of action.
- Adds § 552.102 to require the OAG to create and maintain an online mechanism to receive consumer complaints.
- Adds § 552.103 to give the attorney general civil investigative authority.
- Adds § 552.104 to provide for notice of violation and opportunity to cure.
- Adds § 552.105 to authorize civil penalties of $10,000 to $12,000 for a curable violation.
- Authorizes a civil penalty of $80,000 to $200,000 for an uncurable violation.
- Authorizes a civil penalty of $2,000 to $40,000 per day for a continuing violation.
- Authorizes the OAG to recover attorney’s fees and costs.
- Creates a rebuttable presumption that a person used reasonable care as required under the statute.
- Allows a defendant to seek an expedited hearing, including a request for a declaratory judgment, on good faith belief that no violation has occurred.
- Shields a defendant from liability if: (1) another person uses the AI system for a prohibited purpose; or (2) the defendant discovers a violation through feedback from a third party, testing (including adversarial testing or red-team testing), following state agency guidelines, or substantial compliance with the most recent version of the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” or another nationally or internationally recognized risk management framework.
- Bars the OAG from bringing an action against a person for an AI system that has not been deployed.
- Authorizes state agencies to sanction regulated persons and assess penalties of up to $100,000 or to suspend or revoke a license or other certification under certain circumstances.
- Adds Chapter 553 to establish a Sandbox Program under the auspices of DIR.
- Adds Chapter 554 to establish the Texas Artificial Intelligence Council.
- 1/1/26 effective date.
TCJL Comments on AI Bill Draft
Rep. Gio Capriglione: Announcement of Draft AI Regulatory Bill – Input Invited
October 28, 2024
Dear Stakeholders,
After extensive effort and collaboration with over 300 industry experts, legislators, and dedicated staff, I am pleased to share the initial draft of Texas’ proposed artificial intelligence (AI) regulatory framework. This bill represents months of research, analysis, and consultation with a diverse range of stakeholders, including yourselves, aiming to build an innovative yet responsible AI governance structure for Texas.
This bill is the culmination of many contributions, both internal and informed by precedent-setting policies from other legislative bodies. Our work has sought to adapt best practices into a uniquely Texan framework that balances Texas’ commitment to fostering innovation while safeguarding fundamental rights and ensuring ethical AI deployment.
In broad terms, the proposed legislation will establish standards and safeguards to prevent algorithmic discrimination, protect user data, and promote transparency. Key features include a risk-based model for AI systems, strong protections against unacceptable use, transparency requirements, provisions to combat bias in automated decision-making, and a regulatory sandbox. Notably, the bill exempts low-risk AI systems and small businesses from burdensome requirements, ensuring Texas remains a friendly environment for innovation.
With this draft, we invite you to provide feedback to help us fine-tune a regulatory approach that meets the diverse needs of Texas industries, consumers, and developers. Please review the attached bill and provide feedback by Monday, November 18th. Join us for the next stakeholder meeting, where we look forward to a robust discussion on this important legislation on Friday, November 22nd at 1:00pm in the Capitol Auditorium. Please RSVP here. RSVPs by email will not be accepted.
Sincerely,
Giovanni Capriglione
State Representative
District 98
This week, Chief Justice Nathan Hecht referred nine rule issues to the Supreme Court Advisory Committee (SCAC), including two items of particular interest to TCJL members. Consistent with SCAC’s rules and processes, the issues will be referred to a subcommittee for review and recommendation.
Third Party Litigation Funding (TPLF)
In response to TCJL’s request to the Texas Supreme Court, the Supreme Court Advisory Committee (SCAC) will be reviewing the matter of Third-Party Litigation Funding (TPLF). The request was sent from Chief Justice Nathan Hecht to SCAC’s Chairman Chip Babcock earlier today with the following instruction:
Third-Party Litigation Funding. The Court has received the attached correspondence regarding third-party litigation funding agreements. The Committee should review, advise whether the Court should adopt rules in connection with third-party litigation funding, and draft any recommended rules.
Artificial Intelligence (AI)
The TCJL Task Force on AI will follow this rule process closely, and TCJL will host a CLE on AI on November 7, 2024, in conjunction with our 38th Annual Meeting in Austin.
Artificial Intelligence. The State Bar of Texas’s Taskforce for Responsible AI in the Law has issued the attached interim report recommending potential changes to Texas Rule of Civil Procedure 13 and Texas Rule of Evidence 901. The Committee should review, advise whether such amendments are necessary or desirable to account for artificial intelligence, and draft any recommended amendments.
The referral letter, including all nine issues and related attachments, appears below. Please stay tuned for updates and relevant TCJL meeting notices, and please contact our office if you would like to participate in the AI or TPLF workgroups. TCJL staff can be reached at info@tcjl.com or 512-320-0474.
SCAC Referral July 2024