The Preemption Doctrine: A Necessary Course Correction After Recentive v. Fox

McDonnell Boehnen Hulbert & Berghoff LLP

The landscape of patent law for artificial intelligence (AI) and machine learning (ML) innovations has become fraught with uncertainty. The U.S. Court of Appeals for the Federal Circuit's precedential opinion in Recentive Analytics, Inc. v. Fox Corp.[1], issued on April 18, 2025, is a watershed moment, starkly exposing the fundamental inadequacies of the current subject matter eligibility approach under 35 U.S.C. § 101. In its first opinion addressing the patent eligibility of ML technologies, the Court affirmed the invalidation of four patents, establishing a formidable and flawed precedent that threatens to stifle innovation in one of the most critical technological sectors of our time. The decision, while consistent with a problematic line of jurisprudence stemming from Alice Corp. v. CLS Bank International[2], serves as a quintessential example of how the two-step Alice/Mayo framework is ill-equipped to evaluate the true inventive character of ML inventions. By mischaracterizing applied ML as "generic" and dismissing the very process of training as non-inventive, the Court has constructed a legal framework that is profoundly disconnected from technological reality, creating a potentially insurmountable barrier to patenting a vast and vital category of AI innovations. A meticulous deconstruction of the Recentive decision reveals not just a flawed outcome, but a flawed legal test in urgent need of reform.

I. The Factual and Procedural Background of Recentive

The case centered on four patents held by Recentive Analytics, which fell into two families: the "Machine Learning Training" patents (U.S. Patent Nos. 11,386,367 and 11,537,960) and the "Network Map" patents (U.S. Patent Nos. 10,911,811 and 10,958,957). The patents claimed methods and systems for using ML to optimize the scheduling of live television events and to generate dynamic network maps for broadcasters -- tasks that, as the Court noted, were previously performed manually by humans. Recentive sued Fox Corp. and its affiliates for infringement, and Fox responded with a motion to dismiss, arguing that the patents were directed to ineligible subject matter under § 101.

The case highlights, by their absence, certain drafting "best practices" whose omission ultimately weighed against the patentee. For example, the drafters included language stating that the invention could use "any suitable machine learning technique,"[3] providing a laundry list of conventional models such as neural networks, decision trees, support vector machines, and regressions. This broad, non-specific language provided the Court with a textual basis to characterize the claimed technology as "generic" and "conventional."

Additionally, during litigation, the patentee admitted that it was "not claiming machine learning itself" and that the patents did not claim a specific method for "improving the mathematical algorithm or making machine learning better." Recentive also acknowledged that "the concept of preparing network maps[] [had] existed for a long time," and that prior to computers, "networks were preparing these network maps with human beings."[4] These admissions effectively allowed the Court to conclude that the patents did nothing more than apply existing, off-the-shelf technology to a new field. Unfortunately, the perceived weaknesses of Recentive's particular patents have led to a broad, damaging precedent for the entire AI industry, one that future innovators must now navigate.

A. The Alice Step One Analysis: An Overly Broad Characterization of the "Abstract Idea"

Applying the first step of the Alice/Mayo framework, the Federal Circuit had to determine whether the claims were "directed to" a patent-ineligible concept, such as an abstract idea. The Court affirmed the District Court's finding that they were, characterizing the patents as being directed to the abstract ideas of "producing event schedules and network maps using known generic mathematical techniques."

The core of the Court's reasoning culminated in an overbroad holding: "patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101."[5] This holding effectively invalidates a common and crucial form of innovation in the AI field: the novel application of existing ML tools to solve problems in new domains.

The Court explicitly rejected Recentive's argument that applying ML to a new field of use -- in this case, television broadcasting and event scheduling -- could confer patent eligibility. Citing established precedent, the Court reiterated that "[a]n abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment."[6] Similarly, the Court was unpersuaded by the argument that the invention was patentable because it applied existing technology to a novel database or data environment.[7] This line of reasoning is particularly problematic for AI, where the curation and application of novel datasets are often a key part of the inventive process.

Furthermore, the Court dismissed Recentive's contention that its inventions were rendered patent eligible by the fact that "(using existing machine learning technology) they perform a task previously undertaken by humans with greater speed and efficiency than could previously be achieved."[8] The Federal Circuit has consistently held that improvements in speed or efficiency resulting from the use of a generic computer do not, by themselves, confer eligibility unless they are based on improved computer techniques.

By equating a sophisticated, trained ML model with a "generic computer," the Court overlooked the transformative technological leap that such automation represents. This line of reasoning was problematic even before Recentive. Questions about improvements in speed or efficiency from using a generic computer, or about limiting an abstract idea to a particular technological environment, are best handled under the evidentiary frameworks of Sections 102 and 103. Indeed, some of these questions should be analyzed from the viewpoint of a person of ordinary skill in the art at the time of the invention. By instead bringing such issues under the ambit of Section 101, courts have created a legal framework that is confusing, arbitrary, unreliable, and utterly subjective. This overly broad characterization at Alice Step One set the stage for the claims' inevitable failure at Step Two.

B. The Alice Step Two Analysis: The Search for a Non-Existent "Inventive Concept"

Having concluded that the claims were directed to an abstract idea, the Court proceeded to the second step of the Alice test, which asks whether the claims include an "inventive concept" sufficient to "transform" the abstract idea into a patent-eligible application. The Court searched for something "significantly more" than the abstract idea itself but concluded that it could "perceive nothing in the claims" that met this standard.[9]

Recentive argued that the inventive concept was the use of "machine learning to dynamically generate optimized maps and schedules based on real-time data and update them based on changing conditions."[10] The Court dismissed this argument, stating that this was "no more than claiming the abstract idea [of generating event schedules and network maps] itself"[11] and failed to identify any transformative element.

The Court's most consequential finding at Step Two was its characterization of core ML processes as non-inventive. It stated that "requirements that the machine learning model be 'iteratively trained' or dynamically adjusted in the Machine Learning Training patents do not represent a technological improvement."[12] This was despite Recentive's argument that its application of machine learning was not generic because "Recentive worked out how to make the algorithms function dynamically, so the maps and schedules are automatically customizable and updated with real-time data"[13] and because "Recentive's methods unearth 'useful patterns' that had previously been buried in the data, unrecognizable to humans."[14] This alone should have been sufficient for a finding of subject matter eligibility.
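
To make concrete what "iteratively trained" and dynamically adjusted models can look like in practice, consider the following minimal sketch. It uses scikit-learn's SGDRegressor together with entirely hypothetical scheduling features (day of week, start hour, a popularity score) that are not drawn from Recentive's patents, and it shows a model being updated incrementally as new "real-time" observations arrive rather than being retrained from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

def fetch_batch(n=32):
    """Simulate a batch of new 'real-time' observations (hypothetical features)."""
    X = np.column_stack([
        rng.integers(0, 7, n) / 7.0,    # day of week (normalized)
        rng.integers(0, 24, n) / 24.0,  # start hour (normalized)
        rng.random(n),                  # popularity score
    ])
    # Synthetic target standing in for the quantity being optimized.
    y = 0.5 * X[:, 2] + 0.4 * X[:, 1] + rng.normal(0.0, 0.05, n)
    return X, y

# A generic linear model, updated incrementally ("iteratively trained") as data arrives.
model = SGDRegressor(learning_rate="constant", eta0=0.01)

for step in range(100):                  # each step represents newly arriving data
    X_batch, y_batch = fetch_batch()
    model.partial_fit(X_batch, y_batch)  # adjust the model without retraining from scratch

# The learned coefficients now reflect the accumulated data stream.
print("learned coefficients:", model.coef_)
```

Even in this toy setting, the value lies in the accumulated, data-driven adjustment of the model's parameters over time, not in the off-the-shelf estimator class itself.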

The Court's analysis reveals a profound judicial misunderstanding of where innovation in applied ML frequently resides. The specific process of iterative training on a dataset to solve a particular problem is often the very heart of the invention. By deeming this process inherent and routine, the Court effectively precludes from patent protection a major avenue of AI innovation. Moreover, when a "generic" ML model is trained to perform a specific task on a specific training dataset, that training clearly transforms the "generic" ML model into a special-purpose ML model, which should be subject matter eligible. Whether such a special-purpose ML model, or a use thereof, is ultimately patentable is properly a consideration under Sections 102, 103, and 112.

Throughout its analysis, the Court repeatedly returned to the theme of the "missing 'how.'" It faulted the patents because "neither the claims nor the specifications describe how such an improvement was accomplished"[15] and because they failed to "delineate steps through which the machine learning technology achieves an improvement."[16] This demand for a detailed, step-by-step recitation of "how" a technological improvement is achieved is fundamentally misaligned with the nature of ML. In many cases, the improvement is an emergent property of the training process; the model learns to recognize patterns that may be too complex for a human to explicitly define, describe, or even know. To demand a precise explanation of "how" a neural network achieves its optimized state is to demand an explanation of the unexplainable, setting a standard that is not just high, but often technologically impossible. This judicial skepticism toward emergent properties and learning-based systems indicates that the Alice/Mayo framework is not merely being applied erroneously; it is being applied in a way that is structurally biased against the very nature of modern AI innovation.

II. The "Generic" Model Fallacy: A Fundamental Disconnect Between Patent Law and ML Reality

The Recentive decision rests upon a flawed premise: that the patents claimed the mere application of a "generic" machine learning model. This characterization, while convenient for the Court's legal analysis, is a legal fiction that willfully ignores the technical realities of how machine learning models are developed, trained, and deployed. It creates a false dichotomy that serves to invalidate legitimate technological advancements and reveals a deep, structural disconnect between § 101 jurisprudence and the actual practice of AI innovation. Challenging this "generic model" fallacy is essential to understanding why the current patent eligibility test is failing.

A. General-Purpose vs. Special-Purpose AI: A Critical Distinction

To understand the Court's error, one must first grasp the critical technical distinction between general-purpose and special-purpose AI models. Foundation models, also referred to as general-purpose AI (GPAI), are large models such as OpenAI's ChatGPT[17], Meta's LLaMA[18], and Google's Gemini[19], as well as image-generation models such as OpenAI's DALL-E[20], that are pre-trained on vast, diverse datasets, often using unsupervised or self-supervised learning. These models are the "general" tools of the AI world. They are not typically designed for any single task but possess a broad range of capabilities, such as text summarization, language translation, and image generation. They are, in essence, powerful but unspecialized starting points. Typically, prompts are used to discover the emergent utilities of such models. Such models are also extremely expensive and cannot be developed without vast investments of time, money, and specialized personnel.[21]

In contrast, specialized models, also referred to as narrow AI, are AI systems that have been designed, trained, or fine-tuned to perform a specific task or operate within a specific domain with high efficiency and accuracy. A model designed to improve diagnosis and drug discovery in healthcare, enhance risk management and fraud detection in finance, perform predictive maintenance and production optimization in manufacturing, or, as in Recentive's case, optimize television schedules, is a specialized model. These models often possess more streamlined architectures and are more cost-effective for their narrow purpose, but they require extensive, carefully curated, domain-specific training data to achieve their specialized function. A primary and ubiquitous method for creating a specialized model is to take a general-purpose architecture (or even a pre-trained foundation model) and train or fine-tune it on a specific dataset to solve a specific problem.
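
As a rough illustration of that specialization step, the sketch below fits a completely general-purpose estimator -- scikit-learn's MLPRegressor, an ordinary feed-forward neural network -- to a small, synthetic, domain-specific dataset. The features and target are hypothetical stand-ins rather than anything taken from the Recentive patents; the point is the pattern itself: generic architecture in, domain-specialized model out.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Hypothetical, synthetic "domain" dataset: each row might represent an event with
# features such as day of week, time slot, and historical audience share.
X_domain = rng.random((500, 3))
# Synthetic target standing in for the quantity a broadcaster wants to optimize
# (e.g., predicted viewership for a given slot); purely illustrative.
y_domain = 2.0 * X_domain[:, 0] - 1.0 * X_domain[:, 1] + 0.5 * X_domain[:, 2]

# The estimator class is entirely general: nothing about it is specific to
# broadcasting. Specialization happens only through training on curated data.
generic_net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
specialized_model = generic_net.fit(X_domain, y_domain)

# After fit(), the object holds learned weights tuned to this domain's patterns.
new_events = rng.random((3, 3))
print("predicted values for new events:", specialized_model.predict(new_events))
```

Nothing about the estimator class is specific to broadcasting; the specialization exists only after, and because of, training on curated domain data.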

B. The Act of Training as an Act of Invention: Transforming a General Tool into a Specific Machine

The central error in the Recentive court's analysis was its failure to recognize this distinction. The Court focused on the starting point of the invention -- a generic class of algorithms like "neural networks" or "decision trees" listed in the specification -- rather than the end product of the invention: a fully trained, specialized model capable of performing a specific, useful function it could not perform before training.

The act of training an ML model is itself an act of invention that should be subject matter eligible. The patentability of this act of invention is a separate consideration. The process of collecting and curating a specific dataset (e.g., historical broadcast schedules, event parameters, target features, venue locations, and ticket sales, as described in the Recentive patents) and then applying computational resources to train a model on that data is what transforms the general algorithmic tool into a special-purpose machine.

Once trained, the model is no longer "generic." It is a new technological artifact. Its internal configuration, such as the millions or billions of weights and biases within its neural network, has been fundamentally altered and optimized to embody the patterns and relationships learned from the training data. This newly configured model is a specific machine, designed and built for a particular purpose. The Court's dismissal of this transformation as a mere "application" of an abstract idea is akin to saying that programming a general-purpose computer to perform a new and useful task is not a technological improvement because the underlying computer hardware is "generic."
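
That transformation can be made concrete in a few lines of code. In this minimal sketch -- plain NumPy, synthetic data, and no claim to reflect any party's actual implementation -- a tiny linear model starts from a "generic," zero-initialized configuration; after an ordinary gradient-descent training loop on a specific dataset, its internal weights are no longer what they were, and the configuration itself now encodes what was learned from the data.

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical domain-specific dataset: 200 examples, 4 features.
X = rng.random((200, 4))
true_w = np.array([1.5, -2.0, 0.7, 3.0])
y = X @ true_w + rng.normal(0.0, 0.01, 200)  # synthetic targets

# "Generic" starting configuration: weights before any training.
w = np.zeros(4)
initial_w = w.copy()

# Ordinary gradient descent on mean squared error.
lr = 0.1
for _ in range(2000):
    grad = 2.0 / len(X) * X.T @ (X @ w - y)  # gradient of MSE with respect to the weights
    w -= lr * grad

print("weights before training:", initial_w)
print("weights after training: ", np.round(w, 3))
# The architecture is unchanged, but the trained parameters now embody the
# patterns in the training data -- a different artifact than the untrained model.
```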

This reveals a fundamental contradiction at the heart of the Court's reasoning. On one hand, the Court dismisses the process of "iterative training" as an inherent, non-inventive aspect of machine learning. On the other hand, it demands that the patentee show a specific improvement to the machine learning model itself. For the vast majority of applied AI inventions, the specific improvement is the direct result of that very training process. The trained model is the improved model. By treating the process as inherent and the resulting application as abstract, the Court creates an inescapable trap that leaves no conceptual space for patentability. This approach, which dissects the invention into its constituent parts ("known training technique" and "abstract application domain") and finds no eligible subject matter in the pieces, fails to recognize the inventive character of the integrated whole. This is a well-recognized flaw in § 101 analysis, but one that is particularly damaging in the context of AI.

C. The Legal Consequences of the "Generic" Fallacy

By enshrining the "generic model" fallacy into precedential law, the Recentive decision has profound and damaging consequences. It effectively declares that one of the most common and powerful forms of innovation in the modern technology industry -- the practical application and specialization of ML models to solve real-world problems -- is presumptively patent-ineligible.

This essentially creates a paradox: the more fundamental and broadly applicable a foundational ML technique (like a neural network architecture) becomes, the less likely its specific, value-creating applications are to be deemed patentable. This is contrary to the constitutional purpose of the patent system, which is to incentivize the reduction of abstract principles to practice. The decision signals that the current § 101 framework is inherently biased toward inventions that can be easily described as a novel physical mechanism or a discrete, new mathematical algorithm. This framework is structurally incapable of properly valuing inventions where the innovation is embodied in the complex configuration of a system (like the weights of a neural network) that is achieved through a process of data-driven learning.

The inevitable result of this legal uncertainty and subjectivity is that innovators will be pushed away from the patent system and toward trade secret protection. While trade secrecy can protect certain AI innovations, such as model weights or proprietary training data, it does so at a great cost to society. The patent system's core bargain is one of disclosure in exchange for a limited monopoly. By making patents for applied AI exceedingly difficult to obtain and enforce, the law discourages this disclosure, slowing the overall pace of innovation and preventing the public dissemination of knowledge that is crucial for building the next generation of technology. At the same time, foreign jurisdictions may allow such innovations to be patented, placing American commerce and economic leadership at a competitive disadvantage.

III. Restoring the Cornerstone of § 101: Preemption as a Superior Analytical Framework

The labyrinthine complexity of the Alice/Mayo framework, with its ambiguous steps and technologically unsound assumptions, is not an accidental feature; it is a symptom of a deeper doctrinal malady. The test has become unmoored from its conceptual anchor. The current two-step inquiry should be set aside in favor of a direct analysis of preemption. This approach, which is more faithful to the Supreme Court's own foundational jurisprudence, offers a clearer, more consistent, and technologically neutral standard for judging the patent eligibility of machine learning and other computer-implemented inventions. It is the necessary course correction to rescue § 101 from its current state of incoherence.

A. The Doctrinal Foundation: Preemption as the "Ultimate Touchstone" of § 101

A historical review of the judicial exceptions to patent eligibility -- laws of nature, natural phenomena, and abstract ideas -- reveals a single, unifying concern: preemption. As noted in Alice, the Supreme Court has "repeatedly emphasized this . . . concern that patent law not inhibit further discovery by improperly tying up the future use of these building blocks of human ingenuity."[22] The fear is that granting a patent on a basic tool of science or technology, such as a mathematical formula or a fundamental economic practice, would "tend to impede innovation more than it would tend to promote it."[23]

This concern animates the Court's landmark § 101 decisions. In Gottschalk v. Benson[24], the Court invalidated a claim to a method of converting binary-coded decimal numerals into pure binary form because it would "wholly pre-empt the mathematical formula" and effectively be a patent on the algorithm itself. Conversely, in Diamond v. Diehr[25], the Court found claims using the Arrhenius equation to be eligible because they were tied to a specific industrial process for curing rubber and did not preempt all uses of the equation. More recently, in Bilski v. Kappos[26], the Court cautioned against claims that would "pre-empt the use of [an] approach in all fields," and in Alice itself, the Court unequivocally stated that "the concern that drives this exclusionary principle is one of pre-emption." Preemption is not merely a peripheral consideration; it is the "ultimate touchstone" and the very reason the judicial exceptions exist.

B. The Jurisprudential Drift: How the Federal Circuit Unmoored the Test from Preemption

Despite the Supreme Court's clear guidance, the Federal Circuit's application of the two-step Alice/Mayo framework has evolved into a rigid, formalistic exercise that frequently sidelines, and at times openly dismisses, the core preemption inquiry. The test has become an end in itself, rather than a means to address the underlying policy concern.

This jurisprudential drift is most evident in the Federal Circuit's explicit holdings on the matter. In cases like Ariosa Diagnostics, Inc. v. Sequenom, Inc.[27], the Court has declared that "the absence of complete preemption does not demonstrate patent eligibility."[28] This statement represents a stunning departure from the doctrine's foundational logic. If the sole purpose of the judicial exception is to prevent the harm of preemption, then the demonstrated absence of that harm should be dispositive of eligibility. To hold otherwise is to allow the exception to "swallow all of patent law."[29]

The "abstract idea" category, in particular, has been expanded far beyond its intended role as a proxy for preemptive claims. It has become a mechanism for courts to invalidate claims that are specific, applied, and non-preemptive simply because their subject matter can be generalized to a high level of abstraction, such as a "fundamental economic practice" or a "mental process." This has led to the erratic and unpredictable results that now plague § 101 jurisprudence, creating an environment of profound uncertainty for innovators, investors, and the patent system as a whole.

The internal logic of the two-step framework is itself flawed. The test allows, and even encourages, a court at Step One to label a claim as being "directed to" an abstract idea even if the claim is narrow and poses no realistic threat of preemption. This initial characterization, however, is not neutral; it taints the entire subsequent analysis. At Step Two, the Court is then asked to search for an "inventive concept," but the elements of the claim that were used to label it "abstract" in the first place are effectively discounted. This makes it nearly impossible for a claim, once branded "abstract," to be saved. The result is that specific, practical, and non-preemptive inventions are routinely invalidated. The Supreme Court created the "abstract idea" category as a tool to identify potentially preemptive claims; the Federal Circuit has transformed the tool into the test itself. By focusing on whether a claim can be described as abstract, rather than whether it is preemptive in its scope, the Court has inverted the analysis and created a doctrine that is inconsistent and untethered from its policy moorings.

C. A Better Way Forward: Applying a Preemption-First Analysis

One remedy for this doctrinal confusion is to return to first principles. The vague, two-step Alice/Mayo inquiry should be replaced with a more direct and coherent test focused on the only question that matters: does the claim, as a whole, preempt all practical applications of a law of nature, natural phenomenon, or abstract idea? If the answer is no, the claim should be deemed patent-eligible under § 101, and the examination should proceed to the substantive requirements of novelty (§ 102), non-obviousness (§ 103), and enablement (§ 112), which are the proper statutory vehicles for assessing the inventive merit of a specific technological implementation.

Applying this preemption-first analysis to the facts of Recentive demonstrates its superiority. The claims at issue were directed to a specific application: using ML to generate optimized television schedules and network maps based on specific types of data, such as historical broadcast information and real-time parameters.

• Did these claims preempt the abstract idea of scheduling? No. Humans and other computer systems remained free to create schedules.
• Did they preempt all uses of machine learning? Clearly not.
• Did they even preempt all uses of machine learning for scheduling? No. A competing innovator would remain free to develop a different scheduling system using a different ML model, a different set of training data, different pre- or post-processing techniques, or different feature engineering to achieve a similar result. The claims did not "wholly pre-empt" a fundamental building block.

Because the claims under consideration in Recentive posed no danger of disproportionately tying up a basic tool of science or commerce, they should have been found eligible under a proper, preemption-focused § 101 analysis. Whether Recentive's specific implementation was novel over the prior art or non-obvious to a person of ordinary skill in the art are separate, important questions -- but they are questions for §§ 102 and 103, not § 101. A preemption-centric framework would restore § 101 to its intended role as a "coarse filter" designed to screen out only those claims that truly seek to monopolize a fundamental principle, not those that claim a specific, practical application of one.

IV. Conclusion: Realigning Patent Law with Technological Reality

The Federal Circuit's decision in Recentive is not an anomaly but a predictable outcome of a broken legal framework in which flawed reasoning has been reinforced and amplified. The ruling, which effectively deems the common practice of applying known machine learning techniques to new domains patent ineligible, is built upon the "generic model" fallacy -- a fundamental misapprehension of how innovation in artificial intelligence occurs. By dismissing the transformative act of training as "inherent" and demanding a specific, articulable improvement to the underlying algorithm, the Court has created a standard that is profoundly misaligned with the emergent, data-driven nature of modern technology.

The ultimate solution lies not in more intricate drafting strategies but in doctrinal reform. The Alice/Mayo two-step test, having become unmoored from its conceptual foundation, has proven to be an unreliable and technologically unsound instrument. Its vague standards and internal contradictions have injected a level of uncertainty into patent law that chills investment and discourages the public disclosure that the patent system is meant to foster.

A clear, direct, and principled analysis based on preemption is the viable path forward. Restoring preemption as the cornerstone of the § 101 inquiry would provide the clarity, predictability, and technological neutrality that the patent system desperately needs. It would properly relegate § 101 to its intended role as a coarse filter, shifting the substantive examination of an invention's merit back to the well-established doctrines of novelty, non-obviousness, and enablement. Such a reform, whether enacted by the judiciary or through legislative action, is essential to realign U.S. patent law with the realities of modern innovation and to ensure that the law continues to fulfill its constitutional mandate to "promote the Progress of Science and useful Arts."

[1] Recentive Analytics, Inc. v. Fox Corp., No. 23-2437 (Fed. Cir. 2025)

[2] Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208 (2014)

[3] Recentive Analytics, Inc. v. Fox Corp. at 5

[4] Id. at 7

[5] Id. at 18

[6] Id. at 14 (citing Intell. Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 1366 (Fed. Cir. 2015))

[7] Id. at 14-15

[8] Id. at 15

[9] See generally id. at 16

[10] Id.

[11] Id. at 17

[12] Id. at 12

[13] Appellant's Reply Br. 2

[14] Id.

[15] Recentive Analytics, Inc. v. Fox Corp. at 12-13

[16] Id. at 13

[17] https://openai.com/chatgpt/overview/

[18] https://www.llama.com/

[19] https://ai.google.dev/gemini-api/docs/models

[20] https://openai.com/index/dall-e/

[21] https://www.tensorops.ai/post/understanding-the-cost-of-large-language-models-llms

[22] Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 214 (2014)

[23] See, e.g., Mayo Collaborative Services v. Prometheus Labs., Inc., 566 U.S. 66, 71 (2012)

[24] Gottschalk v. Benson, 409 U.S. 63 (1972)

[25] Diamond v. Diehr, 450 U.S. 175 (1981)

[26] Bilski v. Kappos, 561 U.S. 593 (2010)

[27] Ariosa Diagnostics, Inc. v. Sequenom, Inc., 788 F.3d 1371 (Fed. Cir. 2015)

[28] Id. at 1379

[29] Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 217 (2014)


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© McDonnell Boehnen Hulbert & Berghoff LLP
