AI Governance in the Agent Era

Baker Botts L.L.P.

“AI governance” is a rapidly developing field of research that focuses on the risks and controls related to AI platforms. Recently, a team of researchers from the Institute for AI Policy and Strategy proposed a framework for such governance in the “agent era.” Notably, AI agents present unique governance challenges because they can cooperate with one another and perform real-world tasks independently of their principals.

In particular, the framework includes five categories of “agent interventions”:

  1. Agents should be aligned with their principals’ values. This may be accomplished by incorporating reinforcement learning, calibrating risk tolerances, and paraphrasing chain-of-thought outputs.
  2. Principals should maintain strict control over their agents’ behaviors. For example, principals should develop tools to void or roll back their agents’ actions, shut down or interrupt their agents’ tasks, and restrict the actions available to their agents.
  3. Principals should ensure that the behavior, capabilities, and actions of their agents are observable and understandable to users. These measures may include providing unique IDs for each agent, logging agent activities, and publishing detailed reports on the reward mechanisms for any reinforcement learning-based agents (see the illustrative sketch following this list).
  4. Principals should employ security and robustness measures to mitigate any external threats to agentic systems or their underlying data. For example, standard access control and sandboxing measures should be implemented, and adversarial testing and rapid response defenses should be deployed on a consistent basis.
  5. Agents should be integrated with social, political, and economic systems. The researchers suggest that agents operate under internal liability regimes mirroring relevant legal schemes, that principals provide mechanisms allowing agents to enforce agreements with one another (e.g., smart contracts), and that principals offer access to agentic services to users on an equitable basis. Additionally, principals should implement detailed evaluations and monitoring at each stage of development and deployment to verify compliance with all of the above measures.
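
To make the control and visibility interventions (categories 2 and 3) more concrete, the sketch below shows one way a principal might wrap an agent in basic governance plumbing. This is purely illustrative and is not drawn from the researchers’ framework: the GovernedAgent class, its method names, and its allow-list design are hypothetical assumptions, and a production system would enforce these controls outside the agent process itself.

```python
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

class GovernedAgent:
    """Hypothetical wrapper combining several interventions described above:
    a unique agent ID, an action allow-list, activity logging, and
    interrupt/rollback hooks. Names and structure are illustrative only."""

    def __init__(self, principal: str, allowed_actions: set[str]):
        self.agent_id = str(uuid.uuid4())        # unique ID per agent (category 3)
        self.principal = principal
        self.allowed_actions = allowed_actions   # restrict available actions (category 2)
        self.audit_log: list[dict] = []          # activity log (category 3)
        self.interrupted = False                 # shutdown/interrupt flag (category 2)

    def perform(self, action: str, payload: dict) -> bool:
        """Attempt an action; refuse anything off the allow-list or after interruption."""
        if self.interrupted or action not in self.allowed_actions:
            self._log(action, payload, status="blocked")
            return False
        # ... the agent's actual action would be executed here ...
        self._log(action, payload, status="executed")
        return True

    def interrupt(self) -> None:
        """Principal-initiated shutdown of further agent activity."""
        self.interrupted = True
        self._log("interrupt", {}, status="executed")

    def rollback_last(self) -> dict | None:
        """Return the most recent executed action so the principal can void it."""
        for entry in reversed(self.audit_log):
            if entry["status"] == "executed" and entry["action"] != "interrupt":
                entry["status"] = "rolled_back"
                return entry
        return None

    def _log(self, action: str, payload: dict, status: str) -> None:
        """Record every attempted action with agent ID, principal, and timestamp."""
        entry = {
            "agent_id": self.agent_id,
            "principal": self.principal,
            "action": action,
            "payload": payload,
            "status": status,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        self.audit_log.append(entry)
        logging.info("agent %s: %s (%s)", self.agent_id, action, status)
```

Under these assumptions, a principal could instantiate the wrapper with a narrow allow-list, review the audit log for each uniquely identified agent, and call interrupt() or rollback_last() if an agent misbehaves.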

This framework represents many of the current best practices for the development of AI agents and should be considered by any company seeking to develop or deploy such systems.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Baker Botts L.L.P.
