Kilpatrick’s Joe Petersen, a partner with more than two decades of experience representing a broad array of clients in litigation, arbitration, and administrative proceedings involving copyright and trademark law, recently joined other firm thought leaders to discuss “From Copyright to Patents: Global IP and Legal Issues in GenAI Innovations” at the 21st annual KTIPS (Kilpatrick Townsend Intellectual Property Seminar).
Speakers examined a wide range of topics, beginning with the latest rulings and Copyright Office guidance that are rapidly reshaping copyright’s fair-use doctrine and the evolving recognition of AI as an inventor, followed by subject matter eligibility for AI-driven inventions and best practices for enablement and disclosure. The session provided a practical overview of how major patent offices in the U.S., China, Japan, and Europe are addressing the most pressing legal issues at the intersection of GenAI and IP.
Joe provides these key takeaways from the discussion:
1. Fair-use rules for training data are unsettled but taking shape. Courts and the Copyright Office are properly recognizing the transformative nature of these tools, but market dilution as a theory of cognizable market harm remains very much a wild card.
2. So far, claims over model outputs are falling flat when safeguards work as intended. In Bartz v. Anthropic and Kadrey v. Meta, the courts found that no plaintiff-owned text actually reached users and that the models’ guardrails properly limited verbatim passages.
3. Regulators on both sides of the Atlantic are moving quickly. Congress is debating the AI Accountability & Personal Data Protection Act, the NO FAKES Act, and the TRAIN Act, while the EU AI Act already mandates model classification and detailed training-data documentation—requirements that could slow roll-outs if businesses are unprepared.
4. Human authorship is still the bedrock of copyright in the U.S. and most of the world. The U.S. Copyright Office reiterates that generative AI is merely a tool; protection attaches only when a person contributes original, creative choices, and time, expense, and effort alone do not suffice.
5. Practical next steps: model developers should continue working to ensure that guardrails prevent infringing output, while users of these models should meticulously document human contributions to AI-assisted works and keep watch on evolving fair-use and market-impact standards so they can adjust licensing and compliance strategies in real time.