Have a SaaS Contract in Place? You May Need an AI Addendum

Harris Beach Murtha PLLC

Virtually every business has signed an agreement with a software as a service (“SaaS”) provider at one time or another. And now, virtually every SaaS provider (it seems, at least) is coming out with an AI-related feature or service. When a vendor introduces AI-related services, and generative AI (“GenAI”) services in particular, the contract needs certain terms particular to this type of service. Because many SaaS contracts were signed before the current GenAI boom, companies may need an AI addendum to their contracts, i.e., a contractual modification designed to clarify AI usage, define responsibilities and mitigate legal exposure.

The first issue to consider is whether your company wants its software vendors using GenAI programs in the first place. While AI has the potential to enhance efficiency and automate complex tasks, organizations should have the option to evaluate and control when and how AI is applied. Without clear contractual limitations, a vendor may introduce AI features that process sensitive data or perform critical business functions without sufficient oversight. Requiring vendors to obtain consent before deploying AI ensures that its use aligns with internal policies and risk tolerance, provides an opportunity to assess compliance with relevant regulations, and prevents sensitive data from being unintentionally processed or analyzed by AI. If an AI tool is introduced mid-contract without proper vetting, an organization could find itself exposed to unexpected risks, including regulatory violations, biased decision-making or unreliable outputs.

Accordingly, if the use of GenAI programs is an issue for your company, you will want to make sure that the addendum includes language prohibiting the vendor from using GenAI programs in connection with the provision of services to your company and from feeding your company’s data into a GenAI program. Alternatively, you could set guardrails specifying which types of uses are permitted and which are not.

Intellectual Property Ownership

Another key issue is the ownership and use of data processed by AI systems. With AI models generating insights, text, images, video and other forms of content, it is critical to establish who owns these AI-generated outputs. Many vendors claim broad rights over data produced by their AI models, raising concerns about intellectual property ownership and confidentiality. Businesses should ensure contracts explicitly define who owns AI-generated content, particularly in creative or strategic applications. At a minimum, companies should make sure the vendor assigns any rights it may have in the output to the company. In some cases, vendors attempt to retain certain rights to AI-generated outputs or certain types of outputs. This can create complications if an organization intends to use those outputs for commercial purposes or maintain exclusive control over sensitive business information. Without clear contractual language, businesses risk losing control over proprietary data and intellectual property, which can lead to competitive disadvantages.

Training GenAI Systems with Customer Data

Closely related to data ownership is the concern over whether customer data is being used to train vendor AI models. A number of AI providers are leveraging customer data to refine and improve their machine learning systems, sometimes without clearly disclosing this practice. While some organizations may be comfortable with anonymized data (assuming the data is being anonymized at all) being used for model improvements, other organizations cannot afford to take such risks. The use of proprietary or sensitive data to train AI models can create significant legal exposure, particularly if the AI system produces biased or inaccurate results or discloses sensitive information. The best practice is adding language to the AI addendum that precludes the vendor from using company data to train its models and permits the vendor to use that data only for purposes of performing its obligations under the contract. This restriction is particularly crucial in industries such as health care, finance and legal services, where improper data use could result in data privacy breaches, compliance violations, regulatory penalties or breaches of client confidentiality.

Indemnification and Limitation of Liability

Another crucial aspect of AI-related vendor agreements is liability and indemnification. Large language models are trained on massive amounts of publicly available data, much of which may be subject to copyright, and AI-generated outputs may therefore infringe third-party copyright or other intellectual property rights. The enterprise terms of use of many popular GenAI providers indemnify users against third-party copyright infringement claims based on their programs’ outputs, but it is highly unlikely that a preexisting SaaS contract would do so. Companies should seek indemnification clauses that protect them from lawsuits or regulatory penalties arising from, for example, third-party IP infringement claims or violations of applicable law resulting from use of the AI model as authorized by the vendor. Without such protections, companies may find themselves exposed to significant risks without recourse against the vendor.

Regulatory Compliance

Beyond data usage, organizations must also ensure vendors remain compliant with evolving AI-related laws and regulations. The legal landscape surrounding AI is rapidly changing, with governments and regulatory bodies worldwide introducing new frameworks to address concerns such as data privacy, algorithmic transparency and bias mitigation. Given the difficulty inherent in passing comprehensive AI legislation at the federal level, states are likely to step in to fill the gap, much as they have done in the data privacy context. In the U.S. alone, over 600 AI-related bills have been introduced at the state level. Companies should ensure AI-related contract provisions mandate vendor compliance with all applicable laws and industry standards. This includes adherence to major data protection regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), as well as sector-specific laws governing AI applications in areas such as financial services, health care and employment.

Companies should also consider adding compliance with applicable law to the scope of the indemnification clause, and carving it out of any limitation of liability section. Additionally, and optimally, contracts should require vendors to implement ongoing monitoring and assessment of their AI tools to ensure they remain compliant as legal standards evolve. Without these provisions, organizations could find themselves liable for AI-related regulatory infractions, even if the vendor is responsible for the underlying technology.

Bias

Ethical and responsible AI use is another critical area that may require explicit contractual safeguards. AI-driven decisions can be opaque and, in some cases, biased or discriminatory. As businesses increasingly rely on AI for decision-making, vendor agreements should establish clear expectations around transparency, bias mitigation and accountability. Optimally, and depending on the type of GenAI product being provided, companies should include language requiring vendors to disclose information about how their AI systems operate, provide explanations for automated decisions and offer mechanisms for organizations to audit outcomes. Bias mitigation is particularly important in AI applications involving hiring, lending, health care and other high-stakes decision-making processes, and many of the AI-related laws proliferating at the state and even municipal level address biased outcomes. Without contractual guardrails, organizations risk reputational harm or legal challenges if AI tools produce unfair, discriminatory or misleading results.

Key Takeaway

As AI technology continues to evolve, preexisting software vendor contracts must keep pace with these changes. By proactively addressing AI-related risks through clear contractual provisions in an AI addendum, companies can better leverage AI’s benefits while minimizing exposure to unforeseen legal and operational challenges. Organizations that fail to update their contracts to account for AI risks may find themselves facing significant liabilities, regulatory scrutiny or loss of control over critical business functions. A well-drafted AI addendum is critical for minimizing risk and ensuring that companies maintain control over when and how their vendors employ AI solutions.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Harris Beach Murtha PLLC
