As promised, legislation aimed at regulating the development and use of artificial intelligence systems has been added to the hopper for the 2025 session.

HB 1709 closely tracks draft legislation that was circulated to stakeholders last month. As you may recall, TCJL offered detailed comments on that draft in hopes that significant changes would be made to the version ultimately introduced. This did not occur, although we are pleased to see that the filed version eliminates the private right of action, which all of the business groups that responded to the draft requested be removed. At the same time, the filed version makes at least two changes that could make the application and enforcement of the legislation more uncertain and difficult to predict from a risk management standpoint.

One of these changes replaces the private right of action with a “consumer right to appeal.” This brief provision allows a consumer to “appeal a consequential decision” to the deployer (i.e., a utility, insurance company, lender, employer, or health care provider). Upon an appeal, the consumer “shall have the right to obtain from the deployer clear and meaningful explanations of the role of the high-risk artificial intelligence system in the decision-making procedure and the main elements of the decision taken.” Given its brevity and the lack of any compliance standard, this provision may be a placeholder. It’s also not entirely clear that the provision avoids the problem that eliminating the private right of action was meant to solve. Presumably, the way the bill now works is that a consumer unhappy with the deployer’s response would turn to the attorney general for an enforcement action, triggering the civil investigative demand process and exposing the deployer to immense administrative and civil penalties. The filed version also adds two new grounds for penalties, an enhanced penalty for deploying an “uncurable” system, and authority for licensing agencies to levy administrative fines and pull licenses. Taken together, these provisions broaden the risk horizon of the proposal.

The other major change we noted pertains to digital service providers and social media platforms. The previous draft of the bill called upon these entities to “make a commercially reasonable effort” to prevent advertisers from deploying non-compliant AI systems. The filed version dispenses with the “commercially reasonable effort” standard and simply mandates that the entities “require advertisers . . . to agree to terms preventing the deployment” of a high-risk AI system “that could expose the users of the service or platform to algorithmic discrimination or prohibited uses.” This appears at first blush to be a little better for DSPs and platforms, since it now makes the issue a matter of contract between those entities and advertisers. If an advertiser violates the agreement, the entities at least appear to have a breach of contract action to help protect themselves. But the provision does not protect the entities from independent litigation on negligence and other tort theories based on the entities’ contracting practices (i.e., the entity knew or should have known that an advertiser was deploying a noncompliant system). In any event, we continue to believe that DSPs and social media platforms can really only shield themselves from liability if they don’t accept advertising that deploys AI.

Below is a brief synopsis of the bill’s liability-related provisions. No doubt the bill is still a work in progress.

HB 1709 by Capriglione (R-Southlake): Adds Chapter 551, Business & Commerce Code, as follows:

Key Definitions:

  • “Algorithmic discrimination”—“any condition in which an artificial intelligence system when deployed creates an unlawful discrimination of a protected classification in violation of state or federal law”
  • “Consequential decision”—“decision that has a material legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of” enumerated categories, including a criminal matter, employment, financial services, residential utility services, health care services, housing, insurance, transportation services, constitutionally protected services or products (i.e., guns), and elections
  • “High-risk artificial intelligence system”—“any [AI] system that is a substantial factor in making a consequential decision” (exempts an AI system “intended to detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review”)

New Statutory Duties for Developers:

  1. Duty of “reasonable care” to protect consumers from “known or reasonably foreseeable risks of algorithmic discrimination”
  2. Duty to inform distributors and deployers of correction, recall, or withdrawal
  3. Duty to investigate if developer “becomes aware or should reasonably be aware” of unlawful use of system
  4. Duty to inform the attorney general of non-compliance and corrective measures
  5. Duty to keep “detailed records” of training sets used to develop a generative system
  6. Duty to provide documentation and information necessary to assist deployer in preparing impact assessments
  7. Duty to disclose certain information to consumers
  8. Duty to identify, prior to deployment, potential risks of algorithmic discrimination and implement a risk management policy

New Statutory Duties for Deployers:

  1. Duty of “reasonable care” to protect consumers from “known or reasonably foreseeable risks of algorithmic discrimination”
  2. Duty to suspend use of non-compliant system
  3. Duty to inform developers and distributors of non-compliant system
  4. Duty to assign competent, trained human oversight of “consequential decisions” made by the system
  5. Duty to complete an impact assessment annually and within 90 days of any substantial or intentional modification of the system (for whom?)
  6. Duty to disclose, following an intentional or substantial modification, the extent to which the system was used in a manner consistent with or varied from developer’s intended use
  7. Duty to retain assessments and pertinent records for three years
  8. Duty to review annually each system to ensure no algorithmic discrimination
  9. Duty to disclose certain information to consumers
  10. Duty to identify, prior to deployment, potential risks of algorithmic discrimination and implement a risk management policy
  11. Duty to notify the council, attorney general, or regulatory agency within 10 days of discovering algorithmic discrimination or an inappropriate or discriminatory consequential decision
  12. Duty to suspend use of the system if the developer discovers the use of unlawful inputs or the production of unlawful outputs
  13. Duty to inform the council and attorney general as soon as practicable of the developer’s discovery of unlawful inputs and outputs
  14. Deployers who put their name or trademark on a high-risk AI system already placed in the market or who intentionally and substantially modify a system are treated as developers

New Statutory Duties for Digital Service Providers and Social Media Platforms:

  • Duty to require advertisers to agree to terms preventing the deployment of a system that could expose users to algorithmic discrimination or prohibited uses

Unlawful Development/Deployment:

  • Use of subliminal techniques “beyond a person’s consciousness”
  • Use of purposefully manipulative or deceptive techniques with the effect of materially distorting the behavior of a person or group “by appreciably impairing their ability to make an informed decision causing the person to make a decision the person wouldn’t have made in a manner that causes or is likely to cause significant harm”
  • Social scoring
  • Capturing a biometric identifier
  • Inferring or interpreting sensitive personal attributes of a person or group using biometric identifiers
  • Use of characteristics of a person or group based on race, color, disability, religion, sex, national origin, age, or a specific social or economic situation to materially distort the behavior of a person that causes or is likely to cause significant harm
  • Inferring the emotions of a natural person
  • Unlawful visual material in violation of § 43.26, Penal Code (child pornography) or § 21.165, Penal Code (sexually explicit deep fake videos)

Enforcement and Penalties for Developers and Deployers:

  • Attorney general enforcement
  • Civil investigative demand authority
  • 30-day notice of violation before bringing an enforcement action
  • 30-day right to cure
  • Injunctive relief, attorney’s fees, and expenses
  • Administrative fine of $5,000 to $10,000 per violation for failure to cure
  • Administrative fine of $80,000 to $200,000 per violation for an uncurable violation
  • Administrative fine of $40,000 to $100,000 per violation in the case of a prohibited use
  • Allows state licensing agencies to assess additional fines of $100,000 against individual licensees and to impose license suspension, probation, or revocation

Presumptions/Defenses:

  • Rebuttable presumption that developer, distributor, or deployer used reasonable care if they comply with their statutory duties

Consumer Appeal:

  • Consumer may “appeal a consequential decision” and “shall have the right to obtain from the deployer clear and meaningful explanations of the role of the high-risk artificial intelligence system in the decision-making procedure and the main elements of the decision taken”

Constitutionality:

  • Bars a court from construing the statute to adversely affect rights and freedoms of a person, including right of free speech

Pre-emption:

  • Pre-empts local ordinances

Exemptions:

  • Small business (USSBA definition)

New Consumer Rights under Data Privacy Act:

  • Amends various sections of Chapter 541, Business & Commerce Code, to give a consumer the right: (1) to know whether the consumer’s personal data is or will be used in any AI system and for what purpose; and (2) to opt out of the sale of personal data for use in an AI system.

New Controller and Processor Duties under Data Privacy Act:

  • Controller must acknowledge the collection, use, and sharing of personal data for AI purposes
  • Processor must assist controller in the security of data collected, stored, and processed by AI systems
