As legal and business professionals focused on protecting intellectual property—especially those of us in business litigation—it's crucial to stay attuned to the expanding influence of generative AI (“GenAI”). While patents and copyrights traditionally shield human-driven innovation and creativity, GenAI presents a different challenge. Because U.S. law currently requires a human inventor or author for patent and copyright protections, businesses must look to trade secret law as a potential safeguard for AI-related assets.
But, as always, not every piece of confidential information qualifies as a trade secret. To be protected, information must not be generally known or readily accessible, must provide its owner with a competitive advantage, and must be subject to reasonable efforts to maintain its secrecy.
Where Does GenAI Fit Within Trade Secret Law?
Broadly defined, GenAI uses machine learning models trained on vast datasets to generate text, images, audio, and code. Unlike traditional AI, which analyzes data and predicts outcomes, GenAI actively creates new content. However, the very capabilities that make GenAI so powerful—data mining, pattern recognition, and predictive algorithms—also enable it to uncover or infer confidential business strategies, research projects, pricing models, client lists, and more, even when companies take significant steps to safeguard this information.
Naturally, this raises critical questions: Can GenAI itself be stolen or misappropriated? Can it be reverse engineered? And are the courts ready to address these complex issues?
The OpenEvidence Case: A First Test for GenAI Protection
Until recently, allegations of direct GenAI theft had not yet been tested in court. That changed with the filing of OpenEvidence, Inc. v. Pathway Medical, Inc. and Louis Mullie on February 26, 2025, in the U.S. District Court for the District of Massachusetts (Case No. 1:25-cv-10471-MJJ).
OpenEvidence, a U.S.-based company, offers a GenAI platform that provides evidence-based clinical decision support to medical professionals by synthesizing peer-reviewed medical research. The defendants—Pathway Medical, a Canadian company, and its Chief Medical Officer, Louis Mullie—are accused of gaining unauthorized access to OpenEvidence’s restricted GenAI platform by falsely posing as another licensed medical practitioner.
According to the complaint, the defendants allegedly used "prompt injection attacks"—a form of cyberattack where malicious prompts trick an AI system into ignoring its safeguards—to extract sensitive and proprietary system information. Specifically, OpenEvidence claims the defendants targeted its “prompt,” “full prompt,” “system prompt,” and “instruction” data, critical components that dictate how the large language model (“LLM”) operates. These system prompts, the complaint asserts, represent some of the company's most proprietary and valuable intellectual property.
Further, OpenEvidence alleges that Mullie registered under a false identity, using stolen National Provider Identifier credentials, to access more advanced versions of its platform—versions that deliver more sophisticated and clinically relevant outputs than the public-facing model.
The claims against the defendants include breach of contract (via violation of OpenEvidence’s Terms of Use), misappropriation of trade secrets under the Defend Trade Secrets Act, violations of the Computer Fraud and Abuse Act and Digital Millennium Copyright Act, and unfair competition and unfair or deceptive acts in the conduct of trade or commerce.
As of now, the defendants have not filed a response.
Why the OpenEvidence Case Matters
This lawsuit is groundbreaking because no court has yet ruled on a case involving prompt injection attacks or their implications for GenAI systems. The facts hint that reverse engineering of GenAI platforms is feasible—raising urgent questions about how courts will treat such activities under existing IP frameworks.
Key questions include:
- What elements of a GenAI system are protectable trade secrets?
- Are system prompts, algorithms, or other internal instructions independently protectable, or are they too interconnected to separate?
- Do prompt injection attacks amount to impermissible misappropriation—or are they merely a new form of reverse engineering?
Litigators love a good fight, but they also want case precedent. OpenEvidence may offer courts the opportunity to clarify whether traditional trade secret and unfair competition laws are sufficient to address these issues—or whether new doctrines must emerge.
Looking Ahead: Protecting Your GenAI Assets
Regardless of how OpenEvidence unfolds, GenAI presents ongoing challenges for IP protection. That said, familiar best practices still apply:
- Use strong contractual agreements with restrictive covenants to safeguard trade secrets.
- Limit access to critical technologies to only those employees who truly need it.
- Implement technical safeguards and Terms of Use agreements that prohibit unauthorized scraping or copying of proprietary systems.
As litigation around GenAI continues to develop, staying vigilant is essential. We will be closely monitoring the OpenEvidence case—and others like it—and will keep you informed as courts begin to shape the legal landscape for GenAI protections.
[View source.]