This update was prepared by Geneva Cline, TCJL’s Research Intern.
The Artificial Intelligence Advisory Council established by H.B. 2060 has now been active for more than six months, and its main purpose appears to be examining artificial intelligence systems employed by the State for concerns related to national security and privacy. Examples of these concerns include international hacking, facial recognition, social scoring, and social media data privacy in the context of national security.
However, some of these appear less pressing than other privacy concerns, such as those relating to HIPAA, the black box phenomenon (where it is impossible to trace the exact processes or layers an AI system uses to reach a final decision), monitoring without consent in the workplace or on social media, and the potential for AI systems to evolve and bypass the privacy protocols put in place by AI manufacturers. Taking all of this into account, the AI Advisory Council will likely examine AI systems used by the State for their capacity to evolve and bypass privacy protocols, a risk compounded by the black box phenomenon. It is also likely, given the current social media climate, that social media data sharing as it pertains to AI will be monitored, though this does not exactly fit the Council's original purpose.
Beyond privacy, the potential for discrimination in AI systems will probably remain an area of focus, especially given the legislature's recent push against DEI. The Council's final report can also be expected to address fraud and deepfakes. Other concerns, such as intellectual property infringement and the sources of foundation model data sets, are also somewhat pressing and ought to be addressed at some point, but they are less likely to be taken up by the AI Advisory Council given its original intent and the systems it plans to examine.