Liability Considerations for Developers and Users of Agentic AI Systems

What Are Agentic AI Systems?

Agentic AI systems are artificial intelligence technologies that:

  • Operate autonomously,
  • Adapt to changing environments, and
  • Execute multi-step tasks based on user input or instructions.

These systems differ from reactive AI (e.g., chatbots), which responds to prompts but does not initiate or adapt actions independently. On the far end of the spectrum lies fully agentic AI, capable of complex, self-directed behavior. For example, an agentic AI system might research a company ahead of a business pitch, create the pitch slide deck, recommend pitch attendees and schedule the pitch meeting autonomously. OpenAI’s recently released AI agent offers a look at how such a system works in practice.

[In this update, a developer is a company that creates and sells agentic AI systems, and a deployer or user is generally an organization (or an individual) that uses these systems in its operations.]

Emerging Liability Issues

Who is responsible for the actions of an AI agent? Historically, liability was typically assigned only when a human actor was involved. However, the rise of autonomous AI raises new legal questions.

In Mobley v. Workday, an individual brought claims against Workday for employment discrimination based on race, age and disability, “alleging that Workday’s algorithm-based applicant screening tools discriminated against him and other similarly situated job applicants.” Workday’s tool used AI and machine learning to screen applicants for Workday’s business customers. Mobley had applied to over 100 positions through the Workday system and was rejected, sometimes within an hour of submitting his résumé and application, suggesting that no human review or interaction could have occurred in such a short timeframe.

Although “Title VII, the ADEA, and the ADA prohibit discrimination by an ‘employer’ or ‘employment agency’,” Mobley brought claims against Workday on agency and indirect-employment theories. The court found partially in favor of Workday but allowed Mobley’s claim to proceed on the theory that Workday acted as an “agent” of employers. The court distinguished Workday’s system from spreadsheets and other passive tools because the Workday system essentially acts in place of a human, carrying out a “delegated responsibility.” This case sets a novel precedent under which an AI vendor could face direct liability for employment discrimination claims and potentially other claims.

Analogy: GPS Liability

When GPS systems were introduced some 20 years ago, concerns arose about users blindly following directions into dangerous situations. Legal disclaimers and user warnings helped mitigate risk.

Agentic AI presents similar issues, but with greater complexity: users authorize the AI to take multiple autonomous actions. If the AI acts within the scope of its instructions, it seems fair and reasonable that the deployer may bear some responsibility.

Hypothetical Scenario: SaaS Fraud Detection

Suppose an agentic AI system is used by a SaaS provider (a deployer) to detect fraud or security issues within its SaaS platform. The agentic AI system detects apparent fraud in an account and disables the customer’s account within the SaaS platform. What if the agentic AI system had only detected a false positive? The SaaS provider may not only have fallen below the uptime guarantees in its SLA for its SaaS product, but it may also be in breach of contract for ceasing to provide its SaaS platform to the customer.

If the customer brings a claim against the SaaS provider, would the SaaS provider then be able to recover from the agentic AI developer under an indemnity if the incident stemmed from a malfunction in the AI system? Much of this will depend on the contract language between the parties, as well as on novel applications of existing and emerging laws. For example, tort law and strict liability may come into play if an agentic AI system is found to be an inherently unsafe product.
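
For illustration only, the short sketch below shows one way a deployer in this hypothetical might gate the agent’s most destructive action behind a confidence threshold and explicit human approval, so that a low-confidence finding is escalated for review rather than triggering an automatic shutdown. The names, functions and threshold (FraudFinding, handle_finding, AUTO_ACTION_THRESHOLD) are invented for this example and do not reflect any particular vendor’s product.

```python
# Hypothetical sketch: escalate low-confidence fraud findings to a human
# instead of letting the agent disable a customer account automatically.
from dataclasses import dataclass

@dataclass
class FraudFinding:
    account_id: str
    confidence: float  # model-reported confidence in [0.0, 1.0]
    summary: str

AUTO_ACTION_THRESHOLD = 0.99           # assumed policy: never auto-disable below this
REVIEW_QUEUE: list[FraudFinding] = []  # findings awaiting human review

def disable_account(account_id: str) -> None:
    # Placeholder for the real SaaS platform call.
    print(f"Account {account_id} disabled.")

def handle_finding(finding: FraudFinding, human_approved: bool = False) -> str:
    """Decide whether the agent may act on a fraud finding."""
    if human_approved or finding.confidence >= AUTO_ACTION_THRESHOLD:
        disable_account(finding.account_id)
        return "disabled"
    # Below the threshold and without approval, escalate instead of acting.
    REVIEW_QUEUE.append(finding)
    return "escalated_for_human_review"

if __name__ == "__main__":
    finding = FraudFinding("acct-123", confidence=0.82, summary="Unusual login pattern")
    print(handle_finding(finding))                       # escalated_for_human_review
    print(handle_finding(finding, human_approved=True))  # disabled, after human sign-off
```

A guardrail of this kind does not resolve the liability questions above, but it creates a record of human oversight and may reduce the likelihood of the false-positive harm occurring in the first place.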

Legal and Contractual Considerations

Key Questions

  • Do AI agents have legal authority to act?
  • Is the technology provider liable for the AI’s actions?
  • Should the user or deployer be considered a supervisor of the AI?
  • Is the agentic AI an inherently dangerous product, exposing its developer to product liability claims?

This is why it is critical for developers of AI agents to include terms in their customer agreements that properly address risk and liability. Developers should also review their user documentation and configuration guidance. An AI developer should further consult its insurance broker and legal counsel to confirm how its policies address AI and whether any policies exclude coverage for certain AI uses.

Likewise, deployers or users of agentic AI systems should closely review and consider the risks of such use. Deployers should conduct risk assessments and review contractual indemnities and limitations of liability. Businesses should ensure that their internal AI usage policies and guidelines address these risks in advance.

Our recent review of multiple AI agent platform contracts and terms of use revealed that:

  • Most include only standard disclaimers and the now-typical AI disclaimers, and
  • Few address potential agency issues or include terms tailored to agentic-AI-specific risks.

Open Legal and Ethical Questions

Developers of agentic AI systems must carefully consider whether to implement built-in guardrails to prevent foreseeable harm, as the responsibility for ensuring safety and ethical behavior increasingly falls on their shoulders. This raises important questions about the extent of developer liability, particularly in cases involving software defects, inadequate warnings or design flaws.

As regulatory frameworks evolve – such as the EU AI Act, which mandates human oversight – there is growing pressure to embed mechanisms that ensure accountability and control. One such mechanism involves configuring permission settings that strike a balance between system autonomy and human supervision. While some AI tools require deployer approval before taking further action, users often retain the ability to override these settings, which can increase both the system’s autonomy and the potential risks associated with its decisions.
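
For illustration only, the sketch below shows what such a permission configuration might look like in practice: higher-impact action categories require explicit human approval unless the deployer deliberately enables an override, and every decision is logged for later accountability. The names (PermissionPolicy, may_execute) and the action categories are invented for this example.

```python
# Hypothetical sketch: permission settings that balance agent autonomy
# against human supervision, with an explicit, logged override switch.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class PermissionPolicy:
    # Action categories that require a human decision before execution.
    require_approval: set[str] = field(
        default_factory=lambda: {"disable_account", "send_external_email", "make_payment"}
    )
    override_enabled: bool = False  # deployer may opt in to full autonomy
    audit_log: list[str] = field(default_factory=list)

    def may_execute(self, action_category: str, approved_by: str | None = None) -> bool:
        """Return True if the agent may execute an action in this category."""
        needs_approval = action_category in self.require_approval
        allowed = (not needs_approval) or self.override_enabled or approved_by is not None
        self.audit_log.append(
            f"action={action_category} approved_by={approved_by} "
            f"override={self.override_enabled} allowed={allowed}"
        )
        return allowed

if __name__ == "__main__":
    policy = PermissionPolicy()
    print(policy.may_execute("draft_report"))     # True: low-impact, runs autonomously
    print(policy.may_execute("disable_account"))  # False: held for human approval
    print(policy.may_execute("disable_account", approved_by="ops@example.com"))  # True
```

Enabling the override shifts the balance toward autonomy; as noted above, that choice can also shift risk toward the deployer, which is one reason the configuration and its audit trail are worth documenting.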

Conclusion

As agentic AI systems become increasingly integrated into business operations, the legal and ethical frameworks surrounding their use must evolve in parallel. Developers and deployers alike must proactively address the unique risks these systems pose, particularly around autonomy, delegation of authority and liability.

Clear contractual terms, robust internal policies, and thoughtful system design – including human oversight and configurable permissions – are essential to mitigating potential harms. As case law continues to develop, organizations that anticipate and plan for these challenges will be better positioned to leverage the benefits of agentic AI while minimizing exposure to legal and reputational risks.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Lathrop GPM

Written by:

Lathrop GPM
