Agentic AI refers to artificial intelligence systems capable of autonomously pursuing goals without constant human intervention. Unlike traditional reactive AI, which is designed to respond to human prompts, agentic AI operates proactively. These systems purport to be context-aware, able to make complex decisions independently, and capable of interpreting nuance, adhering to constraints, and aligning their actions with organizational policies and objectives.
Driving Business Transformation Through Agentic AI
From backend systems to customer-facing services, agentic AI may present promising – though not unlimited – possibilities for streamlining workflows and reducing manual burden. Key areas of emerging utility include:
- Streamlining Operations: Automate routine tasks like scheduling, data entry, and support ticket resolution.
- Enabling Smarter Decision-Making: Synthesize real-time data to generate insights and propose strategic actions.
- Optimizing Resources: Adjust workflows dynamically to reduce waste and increase output.
Strategic Integration is Essential
Successful adoption of agentic AI requires a deliberate, thoughtful approach. During the initial deployment phase, organizations must invest in training, contextual fine-tuning, and integration with existing infrastructure. These early efforts are not merely necessary; they are foundational to a comprehensive risk mitigation strategy.
Emerging Legal and Ethical Questions
As agentic AI assumes more autonomy, its actions may trigger real-world consequences – yet legal responsibility remains unclear. Three illustrative scenarios highlight emerging liability concerns:
Situation 1 – Hardware Failure: While flying home for the holidays, a defective latch causes an overhead bin to open mid-flight; a passenger is injured by falling luggage.
Legal Clarity: Product liability laws generally place fault with the component manufacturer unless the airline was otherwise negligent.
Situation 2 – Chatbot Misinformation: A passenger is misled by an airline’s chatbot about a bereavement fare refund and is later denied reimbursement.
Legal Ambiguity: Who is liable – the developer, the airline, or the chatbot?
Case Precedent: In Moffatt v. Air Canada (2024), a Canadian tribunal held the airline liable for its chatbot’s misstatements, rejecting the argument that the chatbot was a “separate legal entity.” The takeaway: companies can face legal liability for AI hallucinations or inaccurate information provided by chatbots or agentic AI, especially if they fail to implement safeguards such as disclaimers and human oversight.
Situation 3 – AI as a Contract Negotiator: An individual hires an agentic AI travel agent to book a trip to San Jose, Costa Rica. On arriving at the airport, the individual discovers that the agentic AI has made an error and booked a flight to San Jose, California. With no flights available to San Jose, Costa Rica, the trip is thrown into turmoil. This scenario raises key questions:
(1) Can a contract be negotiated and agreed to by an autonomous AI (i.e., did the agentic AI enter into a contract with the airline when it purchased the ticket)?
(2) If the agentic AI makes a mistake, who is responsible: the AI creator, the company that employed the AI (i.e., the travel agent), the consumer who paid for the ticket, or the airline?
Risk Mitigation Strategies for Businesses
To proactively reduce exposure, organizations should:
- Establish Decision Accountability: Clearly document which decisions are AI-made vs. human-reviewed.
- Test: Prior to implementation of any agentic AI, test the product to ensure that it has the capabilities described by the vendor.
- Implement Monitoring Protocols: Use tools to continuously evaluate the accuracy, fairness, and legality of AI outputs.
- Update Contractual Language: Incorporate agentic AI-specific clauses into vendor, employee, and customer agreements.
- Enhance Training and Governance: Ensure employees understand the implications of interacting with AI and follow internal protocols.
- Plan: Have a plan for how your company will respond when the agentic AI makes a mistake (hint: “it’s the AI’s fault, not ours” will not cut it). Assume that at some point the agentic AI will act in a manner inconsistent with your expectations.
Conclusion: Building Resilience Through Responsible AI Adoption
The rapid rise of agentic AI will force legal systems, regulators, and enterprises to rethink how responsibility is assigned. While the law catches up, businesses adopting this technology must take the initiative: understand the risks, update governance models, and proactively define accountability frameworks.
Now is the time to review your legal risk profile and ensure your organization is ready to harness the benefits of agentic AI – safely and responsibly.