Defensibility Considerations With AI: Best and Worst-Case Scenarios

[authors: Lilith Bat-Leah, Ronald J. Hedges]

The use of AI in eDiscovery introduces both opportunities and challenges, particularly when it comes to defensibility. Legal practitioners must be prepared to justify their use of AI tools under both ideal and adversarial conditions. Analyzing best- and worst-case scenarios provides a framework for assessing AI defensibility. In favorable circumstances, AI is treated no differently from human reviewers, allowing for efficient and confidential workflows. In more contentious situations, the producing party may face scrutiny and demands for transparency. It is also crucial to consider what practical cooperation between parties looks like, emphasizing the importance of reasonableness, transparency, and adherence to established principles.

What All eDiscovery Workflows Have in Common

Meet-and-confer discussions are a requirement in litigation and are essential to ensuring that both sides understand the parameters of discovery and raise concerns early on.

Discussions should cover the sources and formats of electronically stored information (ESI), methods for data preservation and collection, and protocols for search and review. Proactively addressing these topics helps ensure that discovery is conducted in a manner that is reasonable, proportional, and defensible, while also minimizing the risk of disputes or sanctions later in the case.

Another commonality is the potential for mistakes in manual or technology-assisted review (TAR). These risks necessitate quality control measures and documentation. The presence of mistakes alone does not undermine the defensibility of a workflow; what matters is whether the overall process was reasonable and proportional.

Precision, recall, and elusion rates are applicable across these methodologies. This consistency allows courts and parties to assess a review process without bias toward the technology used. The key is transparency and the ability to demonstrate thoughtful design and execution.

Best-Case Scenario: AI-Facilitated Review Held to the Same Standard as Human Review

In the best-case scenario, AI review receives the same level of trust and scrutiny as traditional human review. This approach aligns with Sedona Principle 6, which states that the producing party is best situated to determine the appropriate technologies and methods for its own production.

There is no requirement to disclose evaluation metrics such as precision and recall, although calculating them is recommended. Just as with human review, the absence of shared metrics does not imply a lack of rigor, only that the producing party is not compelled to expose its internal quality control measures unless a deficiency is alleged.

However, using AI to search documents and designate them for preservation is risky, whether or not that use is disclosed to opposing counsel. AI may prove useful in identifying custodians and data stores subject to legal hold, so long as no filtering beyond date ranges is applied. If, for any reason, ESI that should have been subject to a legal hold is lost, the producing party must demonstrate that it took reasonable steps to avoid that loss.

Worst-Case Scenario: The Use of AI Is Scrutinized in Every Possible Way

In the worst-case scenario, the opposing party challenges the adequacy of the production and demands full transparency into the AI review process. This includes requests to disclose the prompts used to guide the AI, on the theory that these prompts influence the scope and nature of the review. While this level of scrutiny is uncommon, it may arise in high-stakes litigation or when there is a history of discovery disputes.

The producing party may also face demands to disclose every step of the workflow, including preprocessing, filtering, and post-review validation. Such disclosure can be burdensome, affecting timelines and risking the exposure of privileged information.

Another possible demand is access to “the AI reasoning or decision-making process.” While full access may be infeasible given the opaque nature of large language models (LLMs), many tools log the chain-of-thought “reasoning” output by the underlying LLM in use. Ultimately, courts may balance the requesting party’s need for transparency against the producing party’s right to maintain confidentiality over its tools and methods.

Opposing counsel could also argue that AI may have biases that skew review results. In such cases, the producing party may need to demonstrate that steps were taken to assess and mitigate bias, even if such evaluations are not typically required in human review.

While these demands can be burdensome, they underscore the importance of maintaining thorough documentation and being prepared to defend the process if challenged.

Throughout every step of discovery, it is critical to maintain audit trails that document every action taken. If questions arise about the integrity of the process, a well-maintained audit trail provides evidence that appropriate procedures were followed.

Evaluating With Established Metrics

Evaluating the review process in eDiscovery requires applying established metrics to ensure it is effective and defensible. Three of the most critical metrics are unbiased estimates of recall, precision, and elusion for the review population. These three metrics should be used to evaluate any review workflow, whether technology-assisted or otherwise.
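
To make these metrics concrete, the sketch below shows how they are computed from a validation sample scored against ground-truth responsiveness calls. This is a minimal illustration in Python; the function name and the sample counts are hypothetical and do not represent any prescribed eDiscovery protocol or any particular tool.

```python
def review_metrics(true_pos: int, false_pos: int, false_neg: int, true_neg: int) -> dict:
    """Compute precision, recall, and elusion from validation-sample counts.

    true_pos:  responsive documents the review correctly flagged
    false_pos: non-responsive documents the review incorrectly flagged
    false_neg: responsive documents the review missed
    true_neg:  non-responsive documents correctly set aside
    """
    precision = true_pos / (true_pos + false_pos)  # share of flagged documents that are responsive
    recall = true_pos / (true_pos + false_neg)     # share of responsive documents the review found
    elusion = false_neg / (false_neg + true_neg)   # responsive rate within the discarded (null) set
    return {"precision": precision, "recall": recall, "elusion": elusion}

# Hypothetical counts from a 10,000-document validation sample:
print(review_metrics(true_pos=450, false_pos=50, false_neg=30, true_neg=9470))
# ≈ {'precision': 0.9, 'recall': 0.94, 'elusion': 0.003}
```

Low elusion alongside high recall and precision indicates that few responsive documents remain in the unproduced set; the same calculation applies whether the underlying review was human, TAR, or LLM-driven.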

In Practice: What Does Cooperation Look Like?

Cooperation between parties is essential to avoid disputes and delays. For the producing party, this means being transparent about the use of LLM technology. Sharing high-level information about the technology used and the evaluation metrics obtained can help build trust and avoid conflict.

At the same time, the requesting party must act reasonably. Just as it would be inappropriate to demand the opposing party’s review protocol or seed set in a traditional TAR workflow, it is equally unreasonable to demand exhaustive details about an LLM-based review process without a specific basis for concern. Cooperation means recognizing the balance between transparency and strategic confidentiality.

As the legal community adapts to AI, a shared commitment to fairness, efficiency, and defensibility will be critical in shaping the future of eDiscovery. Ultimately, defensibility in eDiscovery, whether using AI or not, rests on principles of reasonableness, proportionality, and good faith.

The original, full version of this blog is on ACEDS’ website and can be viewed here.


Written by:

Epiq
