Artificial intelligence (AI) is transforming industries across the board, and the legal field is no exception. Clients today are increasingly turning to AI tools like ChatGPT, Google’s Gemini, and other generative platforms to assist with legal research or draft documents before bringing them to their attorneys. At first glance, this seems like a great way to save time and money. But for many clients, it’s having the opposite effect: AI is increasing their legal fees.
The Problem with Biased Prompts
When clients use AI for legal research, they often begin with a specific outcome in mind. They phrase prompts to reflect what they want to be true rather than asking neutrally what the law is. This creates a significant problem: AI tools tend to tell users what they want to hear. The models are designed to be helpful and agreeable, and they often produce confident-sounding but legally inaccurate or misleading responses tailored to the assumptions baked into the prompt.
Clients then bring this AI-generated “research” to their attorneys, expecting confirmation or minor refinement. Instead, the attorney must analyze every argument and fact-check every cited case, often only to discover that the argument is unsupportable and that the cases either do not exist (a phenomenon known as “hallucination”) or do not say what the AI claims. The attorney must then explain to the client why the research is flawed, provide the correct analysis, and often spend more time than if they had been consulted from the start.
The Illusion of a Drafted Contract
Another common scenario involves clients drafting contracts with the help of AI. While AI tools can produce passable-looking agreements, they often miss critical legal terms, lack jurisdiction-specific clauses, or use outdated or unenforceable language. Sometimes the AI will copy contract structures from publicly available templates without accounting for the specifics of the client’s business, applicable laws, or even internal consistency.
When these AI-generated contracts are brought to an attorney, what could have been a simple matter of tailoring an existing firm-approved template becomes a full redraft. The result? More billable hours than if the attorney had handled the drafting from the outset.
What AI Still Can’t Do Well
AI is an impressive tool. It can summarize case law, outline ideas, and explain basic legal concepts in plain English. But it still struggles with:
- Reliable legal research – AI often fails to apply legal standards correctly, and its citation practices can be highly unreliable.
- Jurisdictional nuance – It rarely accounts for state-specific laws, procedural quirks, or local court preferences.
- Practical judgment – AI can’t weigh the risks, strategic considerations, or client-specific factors that are central to effective lawyering.
How to Use AI Responsibly
Many law firms are using AI for a wide variety of tasks that genuinely do save their clients money. Law-specific software integrates AI to make research, drafting, and e-discovery far more efficient. Clients, too, can use AI productively in a legal context. When used thoughtfully, it can be a helpful brainstorming or educational tool. But here are a few tips to avoid inflating your legal bill:
- Talk to your attorney first. You’ll save time and money by starting with the right legal framework rather than backtracking.
- Use AI for learning, not lawyering. Let AI explain basic concepts or help you understand legal terminology, but leave the application of law to your attorney.
- Don’t treat AI output as legal advice. It’s a starting point, not a conclusion.
- Avoid relying on AI-generated citations. Always ask your attorney to verify any authority.
Conclusion
AI is powerful, but it’s not a substitute for legal expertise. In many cases, it’s like asking for directions from someone who has never been to your destination: they may sound confident, even persuasive, but you’ll still end up lost. If your goal is to reduce legal fees and get the best possible outcome, involve your attorney early. The cost of correcting AI’s mistakes may well be higher than the cost of avoiding them altogether.