Orrick’s John Bautista sat down with Rune Kvist, founder and CEO of Artificial Intelligence Underwriting Company, to explore accountability in AI and how creating auditable standards and insurance products could unlock faster, safer adoption across high-stakes industries. Learn about:
- The multi-trillion-dollar “AI agent” market opportunity
- Why leading companies face a dilemma between rapid AI adoption and reputational risk
- How AIUC is building trust in AI agents through certification and insurance products
Rune: AI companies can say, “We’re going to do the work for you.” But as soon as you do that, there’s a question of who’s going to take the responsibility.
John: My name is John Bautista. I’m a partner at Orrick in our Tech Companies Group, and here today with Rune Kvist, who’s the founder of Artificial Intelligence Underwriting Company, one of our new clients. Pleasure to be with you today.
Rune: Great to be here!
John: Rune, it would be great to hear about your background and why you set up on the path to start a new company in establishing standards for AI.
Rune: Absolutely. I started working on AI a little before ChatGPT came out, when I joined Anthropic. And one of the things that was obvious even then was that this was going to be the most transformative technology humanity has ever seen.
Industry leaders are actually facing a hard choice when it comes to AI. On the one hand, they risk that their competitors are going to adopt this technology and make them irrelevant. But if they then move fast themselves, they risk making headlines for the wrong reasons. You might have seen some examples like hallucinated refund policies or Google’s Nazi propaganda.
John: Up to this point, are there other companies that are focused on AI standardization, or do you believe that you’re the first in the market?
Rune: We’re really the first in the market to put forward a standard that’s precise enough that you can audit companies against it and give them certificates.
The core problem we’re solving is: can we create confidence between AI agent companies and their customers? And so we have two things that sit between them. One is the certificate, and the other is an insurance product.
When you read through the history of how standards have emerged, and which ones have been really effective, you often see that it’s insurance companies that take the lead because they’re the ones that pick up the bill when things go wrong.
John: And how big a market do you think this is?
Rune: I think most people at this point are starting to converge on the view that AI agents might be the biggest economic opportunity we’ve ever seen. People throw out numbers like the US labor market is $18 trillion, and this could be at least as big.
And so the question is, what’s the value of enabling that to happen much faster and in more depth and in kind of high-stakes contexts, such as financial services, health, etc.? We think that opportunity is enormous.
John: How are you planning to evolve the standard over time as risks change and as standards change and laws and regulations change over time?
Rune: The No. 1 thing a standard for AI must get right is keeping up with AI capabilities and risks. AI is brand new, and it’s breaking a lot of the existing security paradigms. This technology is moving really, really fast, so we must always be ahead of the curve. Where standards in the security world might update on a five- to 10-year cycle, we plan on making tweaks to the standard every quarter as we get input. So really it’s staying close to the people who are using the standard, the real things that are happening on the ground, and iterating fast.
John: Great. I want to thank you, Rune, for joining us at Orrick’s San Francisco office. We’re thrilled to be working with you.
Rune: My pleasure, thank you.