AI Use: What Should We Keep in Mind When Using AI Tools?

Orrick, Herrington & Sutcliffe LLP

This update is part of our AI FAQ Series. Learn more at our AI Law Center.

1. What questions should I ask before using an AI tool?

Before using an AI tool, consider questions such as:

  • For what purpose(s) will the tool be used?
  • Who will be using the tool within the organization?
  • What are the terms governing the use of the tool?
  • What commercial terms does the provider offer?
  • What types of information will be input into the tool?
  • Will there be adequate quality controls and human involvement with outputs?
  • Will use of the tool be internal or external?
  • Is the use subject to specific legal obligations?

2. What questions should I ask my AI vendor?

Key questions to ask your AI vendor include:

  • Is my data going to be used to train public models?
  • How do prompt engineering and fine-tuning work for your model?
  • Who owns the data input into and generated by your AI tools?
  • Who are some of your enterprise-level customers?
  • Upon what third parties do you rely? Can they access our inputs? How quickly are they required to notify you of a breach or system failure?
  • Can you provide details about your data access controls?
  • Can you provide documentation of compliance and audit results?

Also consider whether any specific transparency or risk assessment obligations apply to the vendor and ensure that these are being satisfied.

For more detail, read our update: 8 Intellectual Property and Commercial Questions to Ask Your Generative AI Tool Provider.

3. Can I input this data into the AI tool?

Whether you may input certain data into an AI tool depends on the terms governing use of the tool, the data being input, the potential use cases for the output, your company’s AI and data-related policies, and other factors. In general, you should avoid inputting any data that may be confidential, proprietary or personal without express authorization from the appropriate teams, partners or customers.

Companies should consider establishing a clear point of contact for questions or issues related to the use and implementation of AI, possibly within existing data and compliance structures. Employees should seek approval from the relevant authority within their company (e.g., the legal team or AI governance committee) and consult any applicable policies before inputting any data into an AI tool.

4. What rights am I giving up to my input into an AI tool?

What rights you may be giving up to your input into an AI tool largely depends on the terms that govern the relevant AI technology. In some cases, you may be granting rights that allow the AI tool provider to train or improve its services (including its broader AI models) on your inputs or to use your inputs for the provider’s own business purposes.

The authority responsible for AI governance within an organization (e.g., the AI governance committee or legal team) should give teams clear guidance on any potential risks to AI inputs when an AI technology is implemented. In addition, employees should contact that authority and consult any applicable policies before inputting any information that may be confidential or proprietary.

5. How do I protect my input into an AI tool?

Organizations should take a multi-pronged approach to protecting AI model inputs. An AI Acceptable Use Policy is an effective starting point: along with other related guidance, it can establish guideposts for the types of data employees should or shouldn’t input into a given AI tool. These policies can also communicate prohibited and permitted uses of AI-enabled tools across the organization, reducing the risk of improper use. They should be generally applicable but informed by the specific AI technology, its potential uses and its end users.

Technical measures are also important here. Secure APIs and network security can reduce the risk of data leakage. Organizations should also thoroughly vet their AI providers through vendor security assessments. Relatedly, organizations should carefully review contracts with such providers and negotiate adequate protections for inputs. Negotiating a standalone enterprise agreement might lead to more customer-favorable terms than agreeing to a vendor’s standard terms of use.

6. Will I own the output from the AI tool?

Ownership of outputs is generally dependent on the terms that govern the tool being used. In many cases, terms governing AI technologies allocate ownership of any resulting outputs to the customer and provide that the tool provider does not claim ownership of the outputs.

However, the U.S. Patent and Trademark Office and U.S. Copyright Office currently do not recognize AI-generated content as registrable, meaning that in many cases you cannot obtain a registered copyright or a patent where the relevant material was generated by AI. Trade secret protection remains viable, so long as the steps necessary to create and maintain a trade secret are taken.

Given these limitations on protection, entities should carefully consider whether to use AI technologies to create outputs that they would expect to be proprietary. The position on ownership may also vary depending on the jurisdiction.

7. Can I use this open-source model?

Depending on your organization and its needs, there may be both advantages and disadvantages to using open-source models.

On the one hand, open-source models may be more transparent, giving companies a better understanding of the model’s architecture, training data and function. In addition, integrating open‑source models into company infrastructure can give companies greater control over their data than relying on third‑party services.

On the other hand, the source code for open-source models (and potentially other detailed technical information about such models) is often publicly available, which may increase the risk that bad actors will find ways to exploit the model. Relatedly, open-source models may have been trained on unsuitable datasets (including biased data, unauthorized personal data and mislabeled data) and may not include the same safety features as a paid model.

Like other open-source software, open-source AI models may come with licensing terms that require users to license their own proprietary software under an open-source license. Moreover, open-source models may be less user‑friendly and may lack the dedicated enterprise customer support that “closed” model providers offer.

Consequently, organizations should consult with subject matter experts and key stakeholders prior to deciding whether to use an open-source model.

8. What can I say publicly about our use of AI?

Public statements about your use of AI or AI-related capabilities must be truthful, non-misleading and aligned with regulatory advertising guidelines. You can discuss the benefits and intended use of your AI systems, but you should take care to avoid overstatements about capabilities or caliber (e.g., “Our AI is the most secure platform for processing data in the world.”). You should have a reasonable basis for any such statements and avoid adopting boilerplate language about capabilities or in risk disclosures.

Where fact-based comparisons to others’ products are made, they should be verifiable, and you should retain documentation of the underlying basis for the claims. This applies to any type of disclosure, advertisement and most other statements outside of the company, even if directed to a single potential partner or investor. U.S. regulators, including State Attorneys General, the Department of Justice, the Consumer Financial Protection Bureau, the Securities and Exchange Commission, the Federal Communications Commission and the Federal Trade Commission, have shown they are paying attention to these types of statements and, in some cases, taking action.

9. What standard of care applies to my use of AI?

The standard of care applicable to AI may depend on the context and the jurisdiction in which AI is deployed. In general, however, case law does not yet provide firm guidance on the relevant standard of care. The standard of care applicable to the use of AI may evolve significantly as courts adjudicate negligence claims implicating AI.

10. Can I be liable for my use of AI?

Yes. There is an emerging trend of companies being held responsible for uses of their AI that result in harm.

Utah’s first-in-the-nation AI legislation explicitly states that claiming generative AI was the cause of a violation is not a defense under the state’s consumer protection laws. Europe is revising its liability rules to address damage caused by AI systems—specifically, the AI Liability Directive aims to ensure that individuals harmed by an AI system can benefit from the same level of protection as individuals harmed by other technologies in the EU. In addition, recent litigation suggests that companies may be held liable for customer service chatbots that miscommunicate company policy to a consumer who detrimentally relies on that information.

Nevertheless, determining liability for a use of AI that results in harm is a complex and evolving area. Accordingly, companies should carefully monitor developments in this space.

11. Does my insurance policy cover my uses of AI?

Existing insurance policies may provide some protection against many of the risks posed by AI. For example, many cyber policies provide coverage for regulatory liability and media liability that could prove relevant to AI-related risks. In addition, technology errors and omissions insurance policies may provide protections for companies that provide AI-powered products or services in contexts where mistakes could lead to bodily injury or property damage.

Nevertheless, policyholders should review their existing insurance policies to determine whether they have adequate coverage for the AI risks relevant to their business and carefully assess any AI-related policy changes at renewal.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Orrick, Herrington & Sutcliffe LLP
