AI Hallucination Reveals More Than Its Creator Bargained For

Cranfill Sumner LLP
The concept of Artificial Intelligence (AI) “hallucinating” – i.e., generating answers and sources that do not exist – is sweeping through the popular lexicon. More troubling, however, are the instances in which AI platforms do not hallucinate at all and instead provide very real answers: your financial and personal identifying information.

What Is an AI Hallucination?

Often, the term “AI” refers to generative Artificial Intelligence (GenAI) such as ChatGPT, ServiceNow, Pega, and other popular services. GenAI uses large sets of data and proprietary algorithms to predict answers to search queries. In some instances, GenAI can reveal information that its creators do not want revealed, either through prompt/query engineering or a GenAI prediction error. AI hallucinations range from simple errors to large swaths of fabricated content that look and feel authentic. AI-generated reports have credited authors with articles they never wrote and have cited cases for legal briefs that never existed. While AI hallucinations can fabricate fictitious videos, images, and audio, equally troubling, if not more so, is the potential for misuse of individuals’ personal information – a concern that raises questions about the effectiveness of existing data privacy protection laws.

Recent reports describe how one accounting firm’s GenAI model, Sage Copilot, revealed to the firm’s clients confidential financial data belonging to multiple other customers.[1] Incidents such as this raise an obvious question: what laws are needed to protect the public?

What Laws Regulate Data Privacy in the World of AI?

A natural approach to AI regulation focuses on data privacy. Existing privacy laws began to come online before the recent surge in AI use. In the U.S., federal privacy legislation has generally targeted specific, limited industries or sectors, such as laws protecting personal health information (HIPAA and HITECH) or financial data (Gramm-Leach-Bliley Act). The California Consumer Privacy Act (CCPA) is currently the most comprehensive data privacy law in the country, and Virginia and Colorado, among other states, have enacted similarly comprehensive laws modeled on it. While none of these laws directly limit how a GenAI model trains on a person’s data, California residents do have the right to ask that their personal data be deleted or its use limited. Violations of the CCPA – such as when a GenAI model reveals personal financial, medical, or other identifying information – could expose the company responsible for the GenAI model to suits by affected individuals. In Europe, the General Data Protection Regulation (GDPR) represents the Continent’s effort to create comprehensive privacy rules that are not sector-specific. Nevertheless, the rapid growth of GenAI has outpaced the ability of such privacy legislation and rulemaking to protect individuals’ personal information from the automated data mining, collection, and production that GenAI undertakes.

Foreign jurisdictions are beginning to regulate AI directly. Currently, the most comprehensive laws specific to AI come from the European Union. The EU’s attention to AI governance (more precisely, to the protection of personal information) has been evolving since at least 2018, when the European Commission set up an expert group to advise it on AI issues and draft guidelines for the ethical use and development of AI. Out of this effort came the AI Act, which established a comprehensive regulatory framework calling for the classification, analysis, and regulation of AI systems based on the risk they pose to users, with heightened controls for riskier applications. GenAI is not necessarily designated as high risk per se, but it must comply with transparency obligations – namely, disclosing to users that content was generated by AI. AI systems that pose extreme and unreasonable risks may be prohibited outright under EU law; one example of such a risk is an AI system using facial recognition for racial discrimination outside of law enforcement applications.

AI and Data Privacy Regulation in the United States

Whether a similar model of comprehensive federal AI legislation is on the horizon in the U.S. remains to be seen. There are few direct federal regulations specifically addressing GenAI. The Federal Trade Commission, under former Chair Lina Khan, attempted to regulate GenAI without success; beyond that, little actual progress has been made at the federal level. The current Administration seeks to make United States-developed GenAI and general AI world-leading by removing regulatory obstacles. At an AI summit in Paris on February 11, 2025, Vice President J.D. Vance warned world leaders against excessive regulation of AI, asserting that it would strangle a budding industry sector. He also suggested that other countries’ efforts to regulate AI were designed to stymie American AI development, stating that “some foreign governments are considering tightening the screws on U.S. tech companies.”[2] Given that outlook, a federal regulatory regime addressing the risks inherent in GenAI appears unlikely to emerge in the near future.

However, a number of states have entered the fray and begun to enact state-specific AI laws and regulations. In 2024, almost 700 legislative proposals included AI-related requirements, and just over 100 were enacted. Six states have enacted legislation targeting AI, focusing in particular on protecting the privacy rights of individual data subjects; eleven others considered but did not pass such legislation in the past year. These laws add to the patchwork of state-level data privacy laws and regulations already on the books, which, collectively, are for now the most robust legal guardrails on the development of GenAI in the U.S.

Some states, such as Illinois, have enacted laws regulating the use of AI in employment settings, prohibiting its use in a discriminatory manner and requiring consent from job applicants before AI is used in conjunction with interviews. Others, such as Florida, have focused more on the impact of AI in the political process, requiring disclaimers, and have also prohibited the use of GenAI in conjunction with child sexual abuse material. Colorado has taken an approach similar to (though less comprehensive than) the EU’s, applying a risk-based framework to identify and regulate high-risk AI developers. California, one of the first states to enact AI legislation, in 2019 outlawed the use of anonymous bots online to promote sales or affect the outcomes of elections. A number of other states have taken a more cautious approach, creating commissions to examine the risks and benefits of GenAI before recommending regulatory action.

The increased use of GenAI models will likely be accompanied by more data breaches like those recently reported. The likely absence of comprehensive federal AI regulation, coupled with a growing patchwork of state and foreign laws and regulations, will require companies to expend significant resources on compliance. To understand and navigate the growing body of law in this area, companies that use or develop GenAI models should constantly monitor developments in the AI regulatory landscape and seek legal advice when using or offering GenAI as a service.


[1] Frank Landymore, Accounting Firm’s AI Caught Telling Customers About Each Others’ Financial Records, Futurism, retrieved from https://tinyurl.com/y8rvdx2b (Jan. 21, 2025).

[2] Sam Schechner and Stacy Meichtry, JD Vance Warns U.S. Allies to Keep AI Regulation Light at Paris Summit, The Wall Street Journal, retrieved from https://www.wsj.com/tech/ai/vance-warns-u-s-allies-to-keep-ai-regulation-light-aa33c008?mod=hp_lead_pos2 (Feb. 11, 2025).

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Cranfill Sumner LLP
