[co-author: Ariana Tagavi*]
Missed Anthropic Perspectives & Mixed AI Meta-Phors Cloud Copyright Law

The evolution of generative artificial intelligence has prompted courts in two highly publicized recent federal district court decisions to apply copyright law’s doctrine of fair use to the “training” and output of generative AI systems. We will discuss those two cases—Kadrey v. Meta Platforms, Inc. and Bartz v. Anthropic PBC—in further detail below to illustrate the evolving legal issues surrounding this emerging technology. In addition to addressing AI-focused issues, these rulings revisit, and seem to reinterpret, copyright’s fair use doctrine in a manner that, to our way of thinking, displays two shortcomings:
- First, the opinions appear to conflict with the Supreme Court’s nuanced analysis in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, particularly in their treatment of the first fair use factor: the purpose and character of the use. This piece will explore those inconsistencies, concluding that these AI decisions misapply, or insufficiently engage with, Warhol’s guidance on “transformativeness” and market substitution.
- Second, these opinions also deal inconsistently, and ultimately unpersuasively for us, with the concept of “copying” and with the “learning” and “training” metaphors used to describe how large language models (LLMs) are created and then work. They leave largely unexplored in the AI context the existing body of law under the doctrine of non-literal infringement, which prohibits unauthorized reproduction of protected expression beyond exact copying, as seen in the Second Circuit’s decision in Castle Rock Entm’t, Inc. v. Carol Publ’g Grp., Inc. and in other cases.
Let’s turn to those issues now.
I. Did Meta and Anthropic Misapply, Or Incompletely Apply, Warhol?
A. The Fair Use Framework Warhol v. Goldsmith Required
In Warhol, the Supreme Court clarified that the first fair use factor—“the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes”—must consider both whether the new use is transformative and whether it competes with the original work’s market. The Supreme Court rejected an indulgent definition of what is included in “transformative use” because:
an overbroad concept of transformative use, one that includes any further purpose, or any different character, would narrow the copyright owner’s exclusive right to create derivative works. To preserve that right, the degree of transformation required to make “transformative” use of an original must go beyond that required to qualify as a derivative…In sum, the first fair use factor considers whether the use of a copyrighted work has a further purpose or different character, which is a matter of degree, and the degree of difference must be balanced against the commercial nature of the use. If an original work and a secondary use share the same or highly similar purposes, and the secondary use is of a commercial nature, the first factor is likely to weigh against fair use, absent some other justification for copying.
[143 S.Ct. 1258, 1275-1277 (2023)].
The Warhol majority emphasized that a use is not “transformative” simply because it adds new expression or meaning. Id. at 1273 (“Although new expression, meaning, or message may be relevant to whether a copying use has a sufficiently distinct purpose or character, it is not, without more, dispositive of the first factor.”). Rather, it must do so in a way that does not usurp the original’s market. Id. at 1277. Warhol’s commercial licensing of Prince Series images for use on a magazine cover was not sufficiently transformative because it served the same purpose as Goldsmith’s original photograph: to illustrate an article about Prince. Id. at 1273, 1280, 1287.
The Warhol clarification thus restrains the scope of fair use, especially for commercial entities, by focusing on whether the new use substitutes for the original or targets a different audience or market. “If an original work and a secondary use share the same or highly similar purposes, and the secondary use is of a commercial nature, the first factor is likely to weigh against fair use, absent some other justification for copying.” Id. at 1277.
B. AI “Training” and Output: The District Court Approach in Meta and Anthropic
In both Meta and Anthropic, the plaintiffs—authors of copyrighted works—alleged that their works were copied and ingested into LLMs without authorization, and that the outputs of those models either directly or derivatively relied on the original copyrighted content. The defendants moved for summary judgment, arguing that ingestion of data for “training” constitutes fair use and that model outputs are not substantially similar to the plaintiffs’ works.
Both courts, in large part, declined to resolve every fair use question at the summary judgment stage. Meta, at 16; Anthropic, at 8. Each decision, however, notably emphasized its conclusion that using copyrighted works to “train” AI models is “transformative” in nature. In Meta, Judge Chhabria accepted that AI “training” was “transformative” because it repurposes the text for the function of machine learning—a different purpose from entertainment or literary consumption. Meta, at 16 (“There is no serious question that Meta’s use of plaintiffs’ books had a ‘further purpose’ and ‘different character’ than the books—that it was highly transformative.”). Similarly, in Anthropic, Judge Alsup suggested that using works to develop a general-purpose LLM amounts to fair use because the models are not used to reproduce or market the works in their original form. Anthropic, at 12 (“Anthropic’s LLMs have not reproduced to the public a given work’s creative elements”).
C. Points of Inconsistency Between These Recent AI Decisions and Warhol
1. Misapplication of “Transformativeness” Doctrine
In Warhol, the Supreme Court explicitly, and repeatedly, rejected the notion that simply placing a work in a new context or using it for a new technological purpose automatically renders the new work transformative. Warhol, 143 S.Ct. at 1275, 1277, 1280, 1283, and 1287. Yet in Meta and Anthropic, the courts appeared to accept that using copyrighted works to “train” an AI system serves a new purpose (i.e., creating predictive language models) and is thus transformative—without a rigorous inquiry into the market effects or into whether the use actually alters the expressive content or message of the original works. Meta, at 3, 16; Anthropic, at 9, 30. This line of reasoning seems to accept that a new technological purpose can itself be sufficient to render a use transformative. That reasoning, or lack of it, revives the overly broad understanding of “transformativeness” that Warhol was meant to narrow. A general-purpose LLM may ultimately generate outputs unlike the original works, but the ingestion and use of verbatim copyrighted material cannot by itself constitute transformative use under Warhol’s standard, as we read it. Indeed, in Warhol, the Supreme Court insisted that courts look beyond purpose alone and assess whether the new use competes in the same market as the original. Warhol, 143 S.Ct. at 1273, 1280, and 1287. Only engaging in that required analysis avoids, in the Supreme Court’s words, an “overbroad concept of transformative use.” Id. at 1275. Meta and Anthropic do not complete that analysis satisfactorily.
2. Neglect of Market Harm Analysis
The Warhol decision emphasized that the effect on the potential market for the original is crucial—particularly when the use is commercial. 143 S.Ct. at 1273-80, 1283, 1287. By contrast, both Meta and Anthropic minimize or postpone analysis of market harm. These courts refrained from seriously examining whether the use of literary works to “train” commercial AI systems substitutes for the authors’ rights to license their works, whether to AI companies or others seeking to create derivative works. This brush-off is significant because Warhol directs lower courts to weigh market substitution heavily, especially when the new use is done for profit. 143 S.Ct. at 1273-80, 1283, 1287. Moreover, in a world where licensing copyrighted content for generative AI “training” is becoming increasingly feasible and common (as seen in deals by OpenAI and Google), downplaying or giving short shrift to the market harm question arguably undermines the balance Warhol tried to restore between transformative innovation and protection of creative labor.
3. Premature Dismissals Without Applying Warhol-Guided Balancing
Though both Meta and Anthropic are early procedural decisions that allow some claims to proceed, the willingness of these courts to accept transformative purpose arguments without engaging in the deeper balancing test prescribed in Warhol suggests a departure from the more cautious and holistic approach the Supreme Court had urged for fair use. The broad assertions of transformative use based on technological novelty and new functionality run counter to Warhol’s insistence on rigorous market-based scrutiny (an insistence that also marked the Supreme Court’s approach to analogous issues in trademark law in Jack Daniel’s Properties, Inc. v. VIP Products, LLC, 143 S. Ct. 1578, 1588 (2023), the so-called Bad Spaniels case, about which one of us has also written before here and here). In our view, dismissing or downplaying plaintiffs’ concerns by treating AI “training” as sui generis may create a legal double standard that favors technologically complex defendants over traditional copyright holders.
D. Some Additional Thoughts On Warhol
While Meta and Anthropic are not final determinations on fair use, their reasoning reveals a potential drift away from the Supreme Court’s recalibration in Warhol v. Goldsmith. Courts must be careful not to exempt powerful technologies from the same fair use constraints that apply to human artists, particularly when those technologies derive commercial benefit from copyrighted materials without compensation or consent. Cf., e.g., Google LLC v. Oracle America, Inc., 141 S. Ct. 1183, 1199 (2021) (“Just as fair use distinguishes among books and films, which are indisputably subjects of copyright, so too must it draw lines among computer programs. And just as fair use takes account of the market in which scripts and paintings are bought and sold, so too must it consider the realities of how technological works are created and disseminated. We do not believe that an approach close to ‘all or nothing’ would be faithful to the Copyright Act’s overall design.”) A faithful application of Warhol requires courts to look beyond the gloss of innovation and consider the deeper structural impacts on creative markets—and on the authors whose works are fueling the AI revolution.
II. Did Meta and Anthropic Overindulge The “Training/Learning” Conceits & Underapply The Law Of Non-Literal Infringement?
Understanding the issues addressed in Meta and Anthropic requires understanding technology, language, and law, and the limits of each. While each court did yeoman’s work on the issues before it, each may have fallen a bit short, as each used phrasing, analogies, examples, and metaphors that sometimes obscured more than they clarified. The hope here is that our explanation can sort some of this out.
A. Given How LLMs Actually Work, Is The Language Used In These Opinions (And In Lots Of Other Places) Helpful Or Not?
References in judicial opinions like Meta and Anthropic to “copying,” AI “training” and LLMs “learning” are, at best, metaphors—useful for surface-level understanding but fundamentally misleading when used to justify substantive legal conclusions. These terms borrow the language of human cognition, but LLMs do not “learn” in any meaningful biological or intellectual sense. Instead, what LLMs actually do is compute and compress massive statistical patterns across vast bodies of text data.
Here’s why those metaphors break down under scrutiny:
1. LLMs Do Not Store Copies Of Works
Copyright infringement claims generally center on the notion that someone copied or used something without right, license, or defense. Certainly, copying occurs during so-called “training” of LLMs. But LLMs then store information through distributed encoding of patterns in the works the LLM has ingested, not by keeping direct copies of such works or by repeatedly accessing such copies during the prompting stage (also known as “inference”). Thus, the idea that an LLM “copies” a particular work, especially during inference, needs clarification, because the process is not copying in the traditional or human-readable sense; it involves mathematical abstraction of patterns and correlations. Still, the question is important in copyright analysis, especially with regard to whether “training” and generation involve “copying” under the law.
2. LLMs Do Not Learn Like Humans—They Optimize
In human terms, “learning” implies the acquisition of knowledge, understanding, and context. It involves generalization, reasoning, and the formation of conceptual frameworks. LLMs, by contrast, “learn” in the technical sense of optimizing billions of parameters to minimize statistical error on a prediction task: namely, predicting the next token in a sequence of text. This process does not involve comprehension or semantic insight; it is a purely mathematical procedure rooted in loss minimization and gradient descent. See Deep Learning (especially Chapter 5, Machine Learning Basics).
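For readers who want to see just how mechanical this is, below is a deliberately tiny, hypothetical sketch of next-token “learning” by gradient descent (the vocabulary, corpus, and learning rate are all invented for illustration; real LLMs do the same kind of arithmetic across billions of parameters):

```python
# A minimal, hypothetical sketch of "learning" as loss minimization:
# weights are repeatedly nudged to reduce next-token prediction error.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
corpus = [0, 1, 2, 3, 0, 4]   # token ids for "the cat sat on the mat"
V = len(vocab)
W = np.zeros((V, V))          # the model's entire "knowledge": a grid of numbers

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for _ in range(200):          # "training" = many small numeric updates
    for prev, nxt in zip(corpus, corpus[1:]):
        p = softmax(W[prev])  # predicted distribution over the next token
        grad = p.copy()
        grad[nxt] -= 1.0      # gradient of the cross-entropy loss w.r.t. logits
        W[prev] -= lr * grad  # gradient-descent step: reduce prediction error

# The sentence itself is stored nowhere; W now encodes only statistics,
# e.g., that "cat" or "mat" tends to follow "the".
print(softmax(W[0]).round(2))
```

Nothing in this loop understands English; it is loss minimization, token by token, and scale is the main thing a production system adds.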
Referring to AI systems as “learning” from texts implies a process of understanding and intentional transformation that does not occur in practice. When courts say that an LLM “learns” from copyrighted works, they may inadvertently suggest that the model understands or internalizes the works’ meaning, themes, or aesthetic value—analogous to a student absorbing Shakespeare. But, in reality, the LLM is just adjusting numeric weights in a neural network to reflect probabilistic relationships between tokens. These adjustments encode correlations, not comprehension (though the field continues to evolve, and human comprehension, in some sense and perhaps in many instances, may simply be highly developed, layered correlations anyway).
3. “Training” Is Not a Dynamic, Ongoing Process
The metaphor of “training” often evokes the image of an adaptive system that continues to grow, change, or refine itself with experience. But in reality, once an LLM is “trained” (i.e., once the weights of its neural network are fixed), it does not continue to learn simply by responding to new prompts. In other words, an LLM does not grow smarter or more sophisticated from further use during the inference stage. What an LLM does during inference is merely apply the statistical correlations embedded during “training.” (Candor requires the admission that LLMs are sometimes “retrained” and tweaked to improve their performance, but that is like the difference between taking a break to go back to school and simply getting better at your job by doing your job repeatedly.)
So when judges refer to “training” as if it were analogous to an LLM learning to write in the style of an author, they risk conflating the fixed encoding of statistical relationships with active imitation or comprehension. An LLM does not study an author’s work; the LLM compresses data about text into a form that allows it to make general-purpose linguistic predictions. Thus, this sort of “training” is not learning in a human sense: There is no comprehension or synthesis of meaning, only statistical pattern encoding. (Again, completeness requires us to point out that human chess champions improve by analyzing chess moves and theories, but also by honing a more reflexive visual pattern recognition.)
This particular mixed “training” metaphor is one that will doubtless trouble courts as we move forward. We say that because, as seen in Meta, a well-intentioned court can recognize in one part of an opinion that comparing LLM development “to training school children to write well,” as the Anthropic court had done, can be a most “inapt analogy” that “is not a basis for blowing off the most important factor in the fair use analysis,” Meta, at 3 (quoting Anthropic, at 28), yet continue to use the “learning” and “training” language throughout the rest of the opinion, as if the metaphor works so long as school children are not specifically mentioned. Given the chess example (which also applies to NFL quarterbacks using visual pattern recognition and, by analogy, to musicians who can “play by ear” without being able to read musical notation, thanks to human sound-pattern recognition abilities), it is also a bit misleading for the Meta court (at 17) to say “this is not how a human reads a book,” as if that were the same as saying it is not how humans learn and act. The examples of chess, football, and music demonstrate that humans do in fact learn, and then perform, through pattern recognition that informs decisions and reactions without step-by-step analytic consciousness or consideration of each word or note.
4. LLMs Do Not Analyze or Interpret—They Pattern-Match
The process by which LLMs produce outputs is a sophisticated form of pattern-matching based on enormous bodies of token co-occurrence statistics. As has been noted, the model does not “compare” texts in the way a scholar might juxtapose themes or rhetoric; instead, it mechanically computes likelihoods of word sequences, drawing from its internal parameters shaped in “training.” An LLM cannot add meaning or message because an LLM generates merely plausible text by statistical continuation, not truthful text by intentional authorship (although lack of intention or consciousness does not preclude dangerously offensive output). When courts treat LLMs as if they perform acts of analysis or transformation akin to critical reading or authorship, they anthropomorphize statistical modeling. This can mislead legal reasoning—especially under the Warhol fair use framework, which requires courts to assess whether a new work adds meaning or message and how it impacts markets.
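A toy example makes the point concrete. In the hypothetical sketch below (the words and counts are invented), the “choice” of the next word is pure arithmetic over stored co-occurrence statistics; no meaning, truth, or scholarly comparison of texts enters into it:

```python
# Hypothetical co-occurrence counts standing in for "training" statistics.
from collections import Counter

follows = {"brave": Counter({"new": 9, "knight": 4, "soldier": 2})}

def most_plausible_next(word: str) -> str:
    counts = follows[word]
    return max(counts, key=counts.get)  # highest count wins; nothing is "interpreted"

print(most_plausible_next("brave"))     # -> "new": statistically likely, not "meant"
```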
5. Why This Matters for Fair Use
In Warhol, the Supreme Court made clear that the purpose and character of a use must be evaluated based on whether the new use adds a meaningful transformation and whether it competes in the same market. When courts treat LLM “training” as analogous to human learning, they risk ascribing a transformative character to what is in fact a computational, uncomprehending process and, as one commentator noted (at 178), risk further shifting the fair use analysis away from the second artist’s “act of creation…to the particular use the plaintiff believes was infringing. This could create a significant and potentially unsettling shift in copyright law.” This inflation of “learning” can obscure the fact that the outputs may still closely track the inputs, or that the “training” process may substitute for licensed uses of the original works. In short: if “learning” is simply large-scale statistical pattern extraction from copyrighted works, it should not be presumed transformative under fair use merely because the process is technologically complex. The metaphors of “training” and “learning” should not shield courts from asking the real question: whether this use captures value from copyrighted material in a way that competes with the original.
B. So, Now That You Understand The Terms, How Do LLMs Actually Work?
We were certainly tempted to have this section precede the previous one, but finally settled on this order because these are the terms that everyone uses, including the Copyright Office, so we decided it was better first to contextualize the word uses rather than merely disparage them. Since IP law has a long tradition of letting one be one’s own lexicographer, we figured that approach should fly, at least for today. So here’s a breakdown of how LLMs “train,” “copy,” and “use,” or don’t “copy” and “use,” a particular work during “training” and when producing responses to prompts:
1. During “Training”: Indirect and Fragmented “Copying”
During “training,” indirect and fragmented copying occurs as the LLM processes vast amounts of text—potentially including the same work many times if it appears in multiple sources (e.g., pirated PDFs, public websites, databases). A large language model (e.g., GPT, LLaMA) is exposed during “training” to vast datasets composed of tokenized text. These datasets can include books, web pages, news articles, source code, and other written materials. The “training” process involves the following (a brief illustrative sketch follows the list):
- Tokenization: Text is broken into tokens (words, subwords, or characters).
- Gradient Descent: For each batch of text, the model predicts the next token and adjusts its internal parameters to reduce the prediction error.
- No Storage of Raw Text: The text is not stored within the LLM. Only statistical representations are encoded across the model’s parameter space.
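To make the first step concrete, here is a minimal, hypothetical tokenizer (the vocabulary is invented; real systems use subword schemes such as byte-pair encoding, but the shape of the idea is the same):

```python
# A toy word-level tokenizer: text becomes integer ids before "training" sees it.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenize(text: str) -> list[int]:
    # Unknown words map to a catch-all id rather than being stored verbatim.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # -> [0, 1, 2, 3, 0, 4]
```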
Critically, as already noted, the model does not retain a full or retrievable copy of any “training” document. Instead, it updates its internal parameters (weights) to encode statistical patterns in the text, such as word co-occurrences, syntactic structures, and common phrasings.
Consequently, one cannot reliably say the LLM “copied X work Y number of times.” There’s no explicit log of how many times a particular passage was used. Each token is processed individually, often in batches, and may contribute incrementally to many weight updates. While studies show that LLMs can sometimes memorize and regurgitate copyrighted texts (especially texts repeated often during “training”), these are edge cases and not the norm for every work. Thus, the model does ingest and process every token in its “training” dataset, potentially including multiple occurrences of the same work (especially if deduplication was not performed). The process is one of statistical assimilation, not literal or meaningful copying.
2. During Inference: No Copying Unless Memorization Occurs
Once “trained,” the LLM is fixed (frozen weights) and does not “learn” further unless explicitly fine-tuned or “retrained.” During inference, (i) a prompt is entered, (ii) the model generates output by selecting the most likely next token based on learned probability distributions, (iii) no “training” data is accessed or referenced directly, and (iv) the output is constructed on the fly using probabilistic relationships between tokens. During inference, verbatim copying or reproduction does not occur unless memorization happens. The LLM ordinarily generates text based on learned statistical patterns.
For most prompts, it produces plausible language that is not a copy of anything in its “training” set, and it does not access or retrieve the “training” data. If the model has memorized a passage (usually due to overrepresentation or high linguistic uniqueness), it might reproduce it verbatim if prompted specifically enough. But this kind of memorized output is rare. See Understanding Deep Learning (Still) Requires Rethinking Generalization; see also LLaMA: Open and Efficient Foundation Language Models.
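The frozen-weights point lends itself to a short sketch as well. In the hypothetical toy below, the matrix W stands in for a trained model’s fixed parameters: generation only reads those numbers, never updates them, and never consults any “training” document:

```python
# A toy sketch of inference: sampling next tokens from frozen weights.
import numpy as np

rng = np.random.default_rng(0)
V = 5                          # toy vocabulary size
W = rng.normal(size=(V, V))    # frozen "learned" parameters (a stand-in)

def generate(start_id: int, n_tokens: int) -> list:
    out = [start_id]
    for _ in range(n_tokens):
        logits = W[out[-1]]    # read-only lookup into the fixed weights
        p = np.exp(logits - logits.max())
        p /= p.sum()           # probabilities for each candidate next token
        out.append(int(rng.choice(V, p=p)))  # sample a plausible continuation
    return out                 # W is unchanged afterward: no "learning" occurred

print(generate(0, 6))
```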
C. Though LLMs Can Work Without Word-For-Word Copying, Copyright Issues Remain
Although LLMs rarely reproduce copyrighted works verbatim, they often generate outputs that emulate the style, structure, tone, characters, themes, and expressive choices of protected works. LLMs are “trained” on a vast body of textual data—including many copyrighted works—by optimizing for statistical relationships between tokens. As a result, they can generate content that mimics the stylistic, expressive choices of particular authors or franchises. Under copyright law, such mimicry can constitute infringement even if no exact language is copied, especially when it harms an author’s market in ways that “conflict with copyright’s basic objective: providing authors with exclusive rights that will spur creative expression.” Google, 141 S. Ct. at 1206.
This practice raises serious concerns under the doctrine of non-literal infringement, which prohibits unauthorized reproduction of protected expression beyond exact copying. Courts have long recognized that copyright protection extends beyond literal text to the “total concept and feel” of a work. The argument that LLMs’ outputs are non-infringing simply because they are not verbatim copies fails under established copyright law. Instead, outputs that replicate substantial patterns of expression—even probabilistically—may infringe protected elements under cases such as Castle Rock, 150 F.3d 132, 140 (2d Cir. 1998) (“The test for infringement of a protected expression is substantial similarity… and substantial similarity does not require literal or verbatim copying.”) (emphasis added); see also Nichols v. Universal Pictures Corp., 45 F.2d 119 (2d Cir. 1930); Warner Bros. Inc. v. American Broad. Cos., 720 F.2d 231 (2d Cir. 1983); and Sheldon v. Metro-Goldwyn Pictures Corp., 81 F.2d 49 (2d Cir. 1936).
The Second Circuit in Castle Rock found that a trivia book based on Seinfeld infringed even though it did not copy the scripts directly. What mattered was that the derivative work drew heavily from (in essence “copied” in a prohibited legal sense though not physically or letter by letter or word for word) the “unique characters, dialogue, and plotlines” that constitute the show’s protectable expression. Similarly, LLMs can output summaries, imitations, or parodies that borrow from a copyrighted work’s characters, fictional world, tone, and structure, all of which may be protectable. For example, an LLM “trained” on J.K. Rowling’s Harry Potter corpus may generate outputs mimicking the “wizarding world” setting, character archetypes, and distinctive voice; likewise, an LLM “trained” on George R.R. Martin’s Game of Thrones works might emulate his narrative style and dark political intrigue—even without verbatim copying. Such expressive elements, if original and fixed in the source material, are subject to copyright protection. Their unauthorized reproduction—even indirectly—can give rise to actionable non-literal infringement.
“The proper inquiry is whether the defendant has misappropriated ‘the protectible expression of the plaintiff’s work, rather than the ideas contained in the plaintiff’s work.’” Castle Rock, 150 F.3d at 139 (quoting Reyher v. Children’s Television Workshop, 533 F.2d 87, 91 (2d Cir. 1976)); see also Steinberg v. Columbia Pictures Indus., Inc., 663 F. Supp. 706, 710 (S.D.N.Y. 1987) (“[T]he essence of infringement lies in taking the artistic expression itself… not necessarily the exact form.”). The automated generation of patterns resembling protectable expression—especially without licensing—may qualify as such misappropriation, especially when one understands the “‘fair use’ doctrine as an ‘equitable rule of reason’ that ‘permits courts to avoid rigid application of the copyright statute…’” because “the concept is flexible” and must be applied “in light of the sometimes conflicting aims of copyright law…” Google, 141 S. Ct. at 1196-97 (quoting Stewart v. Abend, 495 U.S. 207, 236 (1990)).
Likewise, the “total concept and feel” standard, developed in Roth Greeting Cards v. United Card Co., 429 F.2d 1106 (9th Cir. 1970), and applied in Warner Bros. v. ABC, 720 F.2d 231 (2d Cir. 1983), has been used to determine infringement where the accused work emulates the style, mood, tone, and presentation of the original work. “A work may be infringing even if it does not copy any single phrase or sentence verbatim, if it substantially mimics the protected structure, sequencing, and overall expressive atmosphere of the original.” Warner Bros., 720 F.2d at 241–43. LLMs “trained” on particular copyrighted works may generate outputs with the same narrative beats, atmosphere, character interplay, and storytelling cadence as the originals—particularly when prompted with similar context. Because the outputs can replicate the expressive choices of the original author (even probabilistically), the “total concept and feel” doctrine may apply. This is analogous to fan fiction, which courts have found may infringe depending on the extent of expressive borrowing (and about which one of us has previously written here and here) and Blurred Lines-type music infringement claims (addressed here and here). These approaches coalesce, or were in some sense predicted, under Judge Learned Hand’s “abstractions test” from Nichols v. Universal Pictures Corp., 45 F.2d at 121-23, which asks courts to distinguish protected expression from unprotected ideas by parsing a work into successive layers of abstraction. As one moves from the general idea to specific expression, the law recognizes that more elements become protectable.
But LLMs are not bound by the idea-expression dichotomy in a legally meaningful way. They statistically compress all levels of a work, including plot structures, dialogue formats, thematic motifs, character development arcs, and syntax and word choice patterns. Thus, even if an LLM does not reproduce exact language, it may still output content derived from expression levels that courts have found to be protected.
III. Concluding Thoughts (For Now And Until We Get “Re-Trained” Like LLMs)
So what is the use of trying to figure this all out? Some argue that because LLMs rely on stochastic (i.e., randomly determined), algorithmic processes, they cannot “copy,” “train,” or “learn” in the legal sense, and therefore copyright law should not apply at all. But courts have never held that intent is necessary for infringement, and computational methods do not immunize otherwise infringing results. See, e.g., Religious Tech. Ctr. v. Netcom On-Line Commc’n Servs., Inc., 907 F. Supp. 1361, 1367 (N.D. Cal. 1995) (“Direct infringement does not require intent or any particular state of mind, although willfulness is relevant to the award of statutory damages.”); MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511, 518 (9th Cir. 1993) (even temporary or intermediate copies made by machines may constitute infringement under § 106(1)). Thus, even if an LLM is operating autonomously and without human creative input, the party deploying the LLM may still be liable for infringing outputs, particularly if the “training” data included copyrighted content without license.
More importantly, LLMs are not neutral tools when they have ingested copyrighted expressive works for the purpose of being able to generate outputs. Under long-standing doctrines of non-literal infringement, such as those discussed above, the reproduction of patterns, style, or “total concept and feel” may be sufficient to constitute infringement where there is imitation of protected elements. Courts evaluating fair use and infringement in the AI context therefore must recognize at least the possibility that non-literal reproduction by statistical modeling is not categorically different from traditional derivative works—it is simply a new method of achieving a form of expressive duplication.
As predicted in a case usually known as Google, but which, given its prognostications, we can here refer to as Oracle, 141 S. Ct. at 1198-99, fair use can:
play an important role in determining the lawful scope of a computer program copyright, such as the copyright at issue here. It can help to distinguish among technologies. [Fair use] can distinguish between expressive and functional features of computer code where those features are mixed. It can focus on the legitimate need to provide incentives to produce copyrighted material while examining the extent to which yet further protection creates unrelated or illegitimate harms. … In a word, [fair use] can carry out its basic purpose of providing a context-based check …
…Just as fair use distinguishes among books and films, which are indisputably subjects of copyright, so too must it draw lines among computer programs. And just as fair use takes account of the market in which scripts and paintings are bought and sold, so too must it consider the realities of how technological works are created and disseminated. We do not believe that an approach close to ‘all or nothing’ would be faithful to the Copyright Act’s overall design.
Either way, a new approach is needed, one that simultaneously protects both original artists’/authors’/creators’ and generative AI entrepreneurs’ investments of time and treasure. As Google/Oracle, 141 S. Ct. at 1208-09, noted, “fair use has long proved a cooperative effort of Legislatures and courts,” and it is high time that cooperation kicked into gear, as noted here before on three separate occasions. Until then, as seen in this piece’s text and previewed in its title, these recent opinions included some missed Anthropic perspectives on what is at stake, and some mixed AI Meta-phors that can cloud (rather than clarify) copyright law.
______________
* Ms. Tagavi is a rising third-year Fordham Law School student interning at Epstein Becker Green during the Summer of 2025.