Exploring the AI Landscape from Policy to Practice

DLA Piper

[co-author: Guy Mathews]

In a recent webinar, Kit Burden, Partner in our Tech and Sourcing team, caught up with Gareth Stokes, Global Co-Chair of our Technology Sector and AI Practice, along with Aarushi Jain, Partner in the Technology and Media team at Cyril Amarchand Mangaldas, to discuss AI in the global market. In addition to looking at the current global AI regulatory landscape, they explored the key legal challenges of AI and the contracting trends developing around them, and highlighted the competitive advantages brought about by early and effective AI adoption through robust and innovation-friendly governance frameworks.

The global regulatory landscape: a bird’s eye view

The rapid development of AI has posed significant challenges for lawmakers. While AI has sparked a global surge in legislative and regulatory initiatives (with at least 60 countries having adopted some form of AI policy since 2017), most jurisdictions still lack targeted AI laws, and the approach to regulating AI differs across regions.

AI-specific laws in place

  • The EU established horizontal risk-based regulation through the EU AI Act, which applies stringent rules to higher-risk AI uses, prohibits certain use cases outright, and imposes large fines for breaches (up to 7% of global annual turnover or €35 million, whichever is higher). The EU AI Act is extra-territorial in its application: even organisations not contracting directly in the EU may be brought within scope of the Act by certain activities within the EU.
  • China has moved fast to implement specific laws for GenAI and recommendation algorithms. These pieces of legislation are supported by comprehensive safety governance frameworks and regulatory guidance.
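The EU AI Act's headline fine is a simple "higher of" calculation between a percentage of turnover and a fixed floor. A minimal sketch of that arithmetic (the function name and integer flooring are illustrative, not drawn from the Act itself):

```python
def eu_ai_act_max_fine(turnover_eur: int) -> int:
    """Illustrative maximum fine for the most serious breaches:
    7% of global annual turnover or EUR 35 million, whichever is higher."""
    pct_based = turnover_eur * 7 // 100  # 7% of turnover, floored to whole euros
    return max(pct_based, 35_000_000)

# For a company with EUR 1 billion turnover, the 7% figure (EUR 70m) governs;
# for one with EUR 100 million turnover, the EUR 35m floor applies instead.
```

In other words, the fixed €35 million figure acts as a floor for smaller organisations, while the 7% measure scales the exposure for larger ones.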

AI-specific laws proposed

  • Australia is exploring mandatory guardrails for high-risk AI settings, complemented by voluntary AI safety standards and ethical principles.
  • In the US, the Trump administration has abandoned proposals made by the previous administration aimed at promoting safe AI use, instead focussing on removing barriers to innovation and leveraging the US’ status as a worldwide leader in AI and the home to many frontier models. State-level regulation is expected, with some states having already implemented rules, but organisations should monitor any impacts of the Trump administration’s ‘big beautiful bill’, which includes proposals to restrict state-level regulation of AI. Organisations contracting for AI in the US may have more freedom to innovate but should be aware of the risks and uncertainties arising from the absence of regulation.
  • The UK government has signalled its intention to adopt a flexible, context-based, and proportionate approach to the regulation of AI, with plans to harness and adapt a principles-based framework for existing sector regulators to interpret and apply to the development and use of AI systems. Parliamentary bills have included those relating to AI systems in the public sector, and there are ongoing debates about reforming intellectual property law to balance the need for innovation and rights protection.

No AI-specific laws

  • In India, while there is no specific AI legislation in place, existing laws regulate AI use, with further legislation and initiatives serving to address any gaps. For instance, the proposed Digital India Act will provide for AI platform traceability and lawful use, and the government has established a committee to review AI’s impact on copyright law.

Despite these differences, common fundamental themes of AI governance are shared across jurisdictions, with a focus on accountability, security, user transparency, and the need for human oversight.

The compliance opportunity

While improper AI use poses legal risks, the bigger risk for organisations is lagging behind competitors who boldly embrace AI. Developing clear AI governance frameworks and policies should not be seen as an administrative burden but as a mechanism to empower an organisation to develop and procure the most innovative and effective AI solutions, and organisations which do so will gain a first-mover advantage over competitors. Effective AI governance involves a targeted and strategic uplift of an organisation’s existing governance frameworks to manage AI risks without stifling innovation or creating further compliance hurdles.

The panellists discussed the six steps to good AI governance:

  1. secure buy-in from senior leadership
  2. create a committee representative of all stakeholders to ensure AI is adopted in a way beneficial to all business functions
  3. identify AI use cases to determine applicable statutory rules
  4. assess wider legal and commercial risks and opportunities associated with a particular use case
  5. set up controls which evaluate the likelihood and severity of risks and ensure corresponding oversight, assurances, and other contractual protections are in place
  6. regularly revisit and update governance processes to keep pace in this fast-moving area.

Emerging market standards in AI contract drafting

Regardless of the legislative backdrop, a common set of legal challenges consistently arises in relation to AI, and the technology’s wide-ranging impacts are driving a shift to a ‘new world’ of contractual considerations and drafting approaches. The team discussed some contractual areas which may need to be revisited to account for AI:

  • Service Level Regimes (SLRs). Previously, SLRs were designed to account for human errors, which are often low-grade and easily identifiable. With an AI solution, errors can scale rapidly and may go unnoticed by the system itself, necessitating a re-evaluation of SLRs and the consequences of any breach, and thought as to the appropriate human oversight of AI outputs.
  • Audit Rights. Careful drafting is needed to balance customers’ need (both for good governance and for compliance with the EU AI Act) to ensure the traceability and explainability of an AI solution’s decision-making against suppliers’ legitimate concerns around disclosing the proprietary or third-party data used to train the model.
  • Intellectual Property (IP). Clarity is needed around the ownership of the various elements of an AI solution. While the supplier will typically retain IP rights in its background AI model, if that model is then developed and trained on customer data, the customer will require either ownership of that ‘customer model’ or an ongoing licence to use the model after the supplier has exited the engagement.
  • Personal Data. Suppliers may seek a right to continue using anonymised and aggregated personal data from customers to train AI models and enhance services even after the contract has come to an end. However, anonymisation and aggregation are themselves acts of processing for which there may not be a lawful basis under the GDPR after the expiry of the contract. Customers must determine whether they can agree to such clauses without breaching GDPR, while suppliers must decide whether they will require customers to confirm they have obtained necessary permissions from data subjects for such processing or remove these clauses from the contract and cease any post-contract processing.
  • Liability Caps. Given the high level of fines which can be imposed under the EU AI Act, there is a push for AI-related liabilities to be either unlimited or subject to a super cap, drawing parallels to the impact of the GDPR on liability caps. The market has not yet settled on whether AI and data protection claims should be subject to separate or combined super caps, and whether the cap limit needs to be raised. This remains a topic for commercial negotiation.
  • Warranties. Warranties need to be adjusted to comply with both commercial concerns and regulatory requirements. In the EU, customers will want the supplier to warrant that the AI solution or the services provided comply with the various requirements of the EU AI Act.
  • Jurisdiction. The choice of governing law will influence how the contract addresses the use of AI in the engagement. It is important to consider the background law carefully, irrespective of what has been drafted in the contract, to ensure that the contract is enforceable and compliant with relevant regulations across different jurisdictions.

Conclusion

The AI landscape is rapidly evolving, presenting exciting opportunities for businesses. While the drive to harness AI’s full potential is strong, it’s crucial to remain aware of the legal and regulatory challenges. Effective governance is essential, encompassing both the external governance of supplier relationships and contracts, and the internal governance of AI usage within the organisation.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© DLA Piper
