This article is based on a presentation at Womble Bond Dickinson’s AI Intensive: Playbook for Innovation and Risk Mitigation virtual summit, held May 20, 2025, along with additional insights from the authors. Contributors included Lewis Borg, Privacy Director for the Americas at Unilever; Caroline Churchill, Partner at Womble Bond Dickinson (UK); and Stacy Hampel, General Counsel of Nipro Holdings Americas. Womble Bond Dickinson (US) Partner Tara Cho moderated the discussion.
Why do companies need AI governance policies and procedures? In part because the legal landscape around AI is so unsettled.
In Europe, the EU AI Act provides more regulatory certainty. But no centralized federal law governing AI use exists in the U.S. Instead, Cho said, a patchwork of laws exists across the states. Some focus on AI deployment, while others target AI use in certain industries, such as healthcare. Still others target specific AI activities. Hundreds of proposed AI bills are currently pending in state legislatures across the country.
Plus, the U.S. experienced a change in Presidential administrations in 2025, which upended how AI is regulated at the agency level. “There are different philosophies on enforcement across federal agencies,” Cho said. “The FTC, for example, is emphatic that we have to ‘beat China’ on AI, but they say they are still going to enforce unfair and deceptive practices and AI washing. There’s a strong push for innovation.”
In May 2025, the U.S. House of Representatives passed a budget bill that included a 10-year moratorium on AI regulation at the state level. But many state Attorneys General opposed the plan, and the Senate ultimately stripped the moratorium from its version of the bill.
All these elements combine for a fast-moving, rapidly evolving landscape for companies to navigate.
In addition, Churchill noted that AI governance is needed to mitigate risk, as companies face numerous AI-related challenges that could undermine their operations and credibility. Key risks include biases in data and AI systems, which can lead to unfair or inaccurate outcomes, and the proliferation of disinformation that compromises the validity of outputs. AI hallucinations, or false responses generated by algorithms, pose another significant threat to reliability. Security vulnerabilities further expose sensitive information, increasing the risk of data breaches and misuse.
Without proper AI governance safeguards, intellectual property can be at risk of theft. Insufficient attention to data quality or synthetic datasets might result in unreliable insights. Addressing these risks is essential to ensure that data practices meet ethical, legal, and societal expectations.
The Need for an AI Governance Program
A March 2025 McKinsey & Company report found that organizations increasingly see gen AI’s effects on revenues in the business units using the technology. Most of those companies understand that governance is needed to ensure AI is used responsibly and in a manner that furthers the organization’s objectives.
Even among organizations not yet using AI, 30% reported working on AI governance, according to a recent International Association of Privacy Professionals (IAPP) study. Perhaps this reflects a prevailing "governance first" approach: putting good governance in place before AI use begins.
Churchill commented, “Having a clear data strategy is key, particularly as regulation from an EU perspective under its AI Act is focusing on the use of general-purpose AI. The [Act's] rules on general purpose AI apply from 2 August 2025 and task providers with additional obligations not only to regulatory authorities and the EU's AI Office, but also downstream to providers seeking to integrate general-purpose AI models into their AI systems.”
The EU's General-Purpose AI Code of Practice was published on 10 July 2025 and, whilst voluntary, it offers a gateway for providers of general-purpose AI to demonstrate governance. Churchill added, “As most corporate use cases for AI involve general-purpose AI, hopefully the Code will aid compliance, because the rules on this are some of the most demanding under the Act.”
Global companies use AI across many areas of operation, Borg said. Many of these businesses have been building AI compliance and ethical-use programs for some time.
“It gives them a solid global foundation to build on,” he said. “They can adjust that to specific things like the EU AI Act, but the aim is to standardize wherever possible.”
Hampel said, “I agree there needs to be standardization across the company if you truly are going to scale it globally. Part of that is harvesting where AI is being used in your organization and reviewing it on a regular basis; it’s crucial to understand where AI is in your organization.”
The Need to Standardize Employee AI Usage
The less AI governance a company provides, the more likely employees are to take matters into their own hands and use unsanctioned AI systems. But this “shadow IT” approach increases risks for organizations. “We don’t even know it’s happening in an organization, nor do we know if IP and personal data are protected,” Cho said. She pointed to the increased cybersecurity risk of using unauthorized AI. But there is also a carrot to compliance: if team members come through the right channels, they can maximize the benefit of AI use. In other words, approved AI systems yield better results with fewer problems.
Accordingly, a policy that requires all AI use to go through a company portal makes sense. There should also be a way to develop a shared library of AI prompts and use cases. Borg said that AI compliance should be as user-friendly as possible and needs to evolve with AI usage. In-house teams can help, not only by designing AI governance plans but also by promoting compliance within the organization.
“Visibility is important, and is the right place to start,” he said. “It will vary by business, but if you want to do AI responsibly, you must curate the right forums internally with stakeholders across teams, including Marketing, HR, IT, and Legal. There’s a huge role for privacy teams. Bringing in AI compliance is a natural evolution.”
“We have people in those departments that are highly skilled, having gone through training when GDPR came out,” Hampel said. “I’m a fan of rebranding the Privacy Office as the Data Governance Office.”
Hampel said a Data Governance Office could report to cross-functional stakeholders on a quarterly basis. Such a report could cover (1) purchased AI, in collaboration with IT and knowledge management, and (2) internally developed AI systems, in collaboration with safety and quality teams.
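To make that kind of inventory concrete, the minimal sketch below shows one way such a register might be structured and queried ahead of a quarterly report. It is illustrative only; the field names, categories, review cadence, and example entries are assumptions, not a format the panel prescribed.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import List

class Category(Enum):
    PURCHASED = "purchased"   # licensed/third-party AI tools, tracked with IT and knowledge management
    DEVELOPED = "developed"   # internally built AI systems, tracked with safety and quality teams

@dataclass
class AIRecord:
    name: str
    category: Category
    business_owner: str
    data_types: List[str]     # e.g., ["personal data", "confidential IP"]
    last_reviewed: date
    next_review: date

def due_for_review(inventory: List[AIRecord], as_of: date) -> List[AIRecord]:
    """Return inventory entries whose scheduled review date has passed, for the quarterly report."""
    return [r for r in inventory if r.next_review <= as_of]

# Hypothetical entries: one purchased tool and one internally developed system
inventory = [
    AIRecord("VendorChatAssistant", Category.PURCHASED, "Marketing",
             ["personal data"], date(2025, 1, 15), date(2025, 4, 15)),
    AIRecord("DefectDetectionModel", Category.DEVELOPED, "Quality",
             ["production images"], date(2025, 2, 1), date(2025, 5, 1)),
]

for record in due_for_review(inventory, date(2025, 6, 30)):
    print(f"Review due: {record.name} ({record.category.value}), owner: {record.business_owner}")
```

Even a lightweight register like this gives the quarterly report a consistent shape: what the tool is, whether it was purchased or built, who owns it, what data it touches, and when it is next due for review.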
“I was shocked at how many AI tools were being used in development. All companies are software companies, even if they don’t realize it,” Hampel said. This points to the need to have safety and quality control in place. She said, “Those software tools are open source software.” This makes it easy for IP and confidential info to slip into AI systems outside of the company’s control.
Churchill said, “Trust and accountability are particularly key, and underpin everything.”
She continued, “Much like the UK and EU GDPR protect the fundamental rights and freedoms of data subjects through transparency, accountability and governance, these principles are foundational to AI. Being transparent fosters trust. It is important to show that the AI system behaves predictably and reliably. Trust directly impacts AI adoption. You won’t use something you don’t trust. Transparency and accountability go hand in hand with this.”
“It’s important to see there is a clear chain of responsibility, ethical standards and that privacy controls are baked in,” Churchill added. "The system is explainable with a ‘human in the loop’ and there is open communication on what it does and its limitations, how it was trained, and how it will be monitored and improved.”
“No matter what, we’re at a point where the business value proposition is undeniable,” Cho said. U.S. companies will invest hundreds of billions of dollars in AI development this year. “There’s more engagement with AI tools than ever. We have to contain and control the risk, while enabling the utilization of AI.”
Managing Third-Party AI Solutions
Many companies look to off-the-shelf AI solutions. But users often find that AI software as a service (SaaS) can be expensive and time-consuming. So how do companies manage third-party tools that are supposed to make things easy?
Hampel said finding the right third-party fit can be challenging. Therefore, larger organizations often look to home-grown processes and tools. “If you have the internal resources to do that, it is more cost-effective,” she said.
Cho said that if a company decides to go the third-party route, it should get a direct contact at the vendor who can take it through the demo and answer any questions.
The Rise of Agentic AI
Agentic AI is a developing area of technology in which AI is designed to make autonomous decisions with limited human supervision. The panel expressed serious reservations about the use of agentic AI in the workplace.
“I, as a user, am hesitant to give up control,” Cho said.
Hampel said larger organizations are more risk averse and haven’t been as quick to adopt this technology. But discussions have centered around brand management and aligning with corporate values, as well as the human factor. “You have to make sure employees don’t feel like they’ve lost autonomy,” she said.
“It’s a quite profound change—the next big wave in AI,” Borg said, noting agentic AI carries risks as well as opportunities. “It makes you rethink where human oversight might enter into the process.”
For example, who is legally responsible if agentic AI makes a decision that causes damage (e.g., allowing unauthorized access to a building)? Womble Bond Dickinson Partner Mark Henriques said that as a litigation attorney, he would look at whether the business owner used reasonable care in choosing a quality AI program. If so, the liability may rest with the vendor. “We see that in industrial litigation now. It will depend on the facts in the specific case,” he said.
In conclusion, building an AI compliance program that works is no longer optional—it’s a strategic imperative. The evolving regulatory landscape, coupled with the risks and ethical considerations intrinsic to AI, highlights the need for proactive governance.
Companies must prioritize creating standardized policies, fostering cross-departmental collaboration, and ensuring transparency and accountability in AI deployment. By addressing challenges such as biases, security vulnerabilities, and the complexities of third-party integrations, organizations can harness AI’s immense potential while safeguarding against its pitfalls. Ultimately, a strong AI governance framework not only mitigates risks but empowers businesses to innovate responsibly and maintain trust in a rapidly advancing technological era.
Key Takeaways: AI Governance Playbook
- Establish a Strong Foundation. Define AI ethics principles and form a cross-functional AI governance committee (e.g., IT, data security, marketing, privacy, legal). Set clear objectives and build consensus. Ensure collaboration to create a robust AI governance program suited to your business.
- Develop Robust Frameworks and Policies. Create clear guidelines for the development, deployment, and use of AI systems. Implement a systematic approach to identifying, assessing, and mitigating risks. Focus on regulatory compliance. Address bias and fairness concerns.
- Implement and Operationalize Governance. Promote transparency, define roles, and conduct continuous monitoring. Educate employees on proper AI usage. Integrate AI governance into existing GRC frameworks and adopt automation tools for compliance tracking and risk detection.
- Foster a Culture of Responsible AI. Secure leadership buy-in and encourage collaboration across functions. Regularly update the AI governance framework and maintain a dynamic inventory of licensed and developed AI to ensure ongoing oversight.