Government Pressure on Regulators Could Prompt Premature AI Adoption and a Surge in Judicial Review

Jenner & Block
In March 2025, the UK government met with regulators to push for faster decision-making processes as part of efforts by Chancellor of the Exchequer Rachel Reeves to cut red tape and boost economic growth. But while the intention is to boost efficiency, increased pressure on regulators risks corners being cut and opens decision-makers up to questions about the fairness of their decisions, particularly at a time when public authorities are increasingly turning to Artificial Intelligence (AI) for assistance.

The rapid emergence of AI has led to its increasing use across society, including by public authorities that are governed by public law. And the government has made no secret of its intention to escalate the use of AI throughout the public sector. In February 2025, the government launched the ‘AI Playbook’ to provide organisations with guidance on using AI, and the prime minister has spoken openly about accelerating AI adoption. It is therefore likely that regulators, under this pressure, will turn to AI to meet performance targets.

However, decisions of public bodies must be fair, reasonable, proportionate, and lawful. These are standards that AI, at its current stage of development, may struggle to meet consistently and effectively. As the use of AI in public decision-making is only likely to increase with pressure on regulators to make decisions more quickly, the government may inadvertently be opening the door to litigation from those who feel that decisions have been made with undue reliance on AI and are therefore unfair. The irony is that by pushing regulators to be more efficient, the whole process could be slowed down by a more litigious environment. In this client alert we underline that a quick regulator is not necessarily a good regulator, and that efficiency might not ultimately be achieved by pressuring decision-makers into cutting corners.

Government Push for Regulatory Efficiency

On 17 March 2025, the chancellor met with Britain's biggest regulators and watchdogs to inform them that they will be given targets and required to undergo twice-yearly reviews focused on streamlining decision-making. Bodies including the Financial Conduct Authority and the Health and Safety Executive have been told that, as part of plans to boost the economy, red tape needs to be cut, and decisions must be made more quickly.

The rationale for the government is that efficiency is a driver for growth. However, decisions made by regulators are often complicated. For example, licences or permits which need to be granted by the likes of the Environment Agency or Civil Aviation Authority are complex and require detailed due diligence to be undertaken before a decision is made. This takes time, and efficiency is not the same thing as speed: a correct outcome cannot always be reached quickly.

The chancellor is looking at implementing targets regarding how quickly decisions on planning applications and new licences for businesses should be made. But pressuring regulators to act quickly will not necessarily mean the system of decision-making is more effective. A key purpose of a regulator is the protection of the consumer, and enforcing the standards which are set through regulation of industries can require time. Speeding up the process risks leading to standards slipping.

Not only is it likely that trying to make decisions faster will lead to poorer and more challengeable decisions, but the increased pressure to act quickly may also tempt regulators to seek assistance from technology, and specifically from Artificial Intelligence.

The Growing Role of AI in Decision-Making

The use of Artificial Intelligence is increasing throughout the world and society. This includes public bodies governed by public law, which are increasingly resorting to AI in their decision-making because of ostensible benefits such as the reduction of human error and increased speed and efficiency. It is not a stretch, therefore, to imagine that a regulator being pressured to make decisions more quickly may ramp up its use of a technology which speeds up the process. This is made even more likely when considering that the government, in its ‘AI Opportunities Action Plan’, has already committed to funding regulators to scale up their AI capabilities. The plan states that regulators’ AI capabilities need addressing urgently and encourages the adoption of AI along with the publication of reports explaining how sector growth has been driven by AI.

The rapid adoption of AI in complex decision-making comes with substantial risks, however. AI technology is still at a relatively early stage of development and there are fundamental question marks over the quality of what it produces, including, for example, hallucinations, incomplete data, and unpredictable or incomprehensible decision-making methods. AI models can produce outputs that appear coherent but are in fact false or biased. This is concerning not only for public bodies relying on an AI model to make important and consequential decisions, but also for those affected by such decisions.

Moreover, these systems are still evolving and there often remains a gap between the technical output of AI and the human understanding of why that output was produced. If public bodies are increasingly deploying a highly technical tool to make decisions, they will need training to understand the capabilities and limitations of the AI tool in order to be able to exercise appropriate human oversight, to understand why decisions are made, and to justify or explain those decisions.

Deploying AI opens up public law decision-making to criticism. Those affected by AI-influenced decisions might feel that they have legitimate grounds for complaint and even legal challenge.

Legal Ramifications: The Rise of Judicial Review

Relying on AI could lead to an environment fertile for judicial review. Judicial review is a specific type of claim brought in the UK High Court and concerned with the legality of the decision-making process and the decision itself, rather than an appeal on the merits. The main grounds for judicial review are illegality, irrationality / unreasonableness, and procedural unfairness. We anticipate that the latter two grounds are the most likely to be applicable in the context of decisions made by or with the assistance of AI.

An AI hallucination or fabricated output might open the door to an argument that the decision was irrational. Likewise, where an automated decision leads to a different result than in similar cases, or the outcome is discriminatory, this could also give rise to a challenge to the decision on grounds of irrationality.

As far as procedural fairness is concerned, there will be circumstances where a public body has a duty to give a fair opportunity to a person affected by a decision to know what information the public body intends to rely on in making its decision.[1] In cases where there is a lack of transparency as to how the decision was reached, or an inability on the part of a regulator to explain the basis for its decision, this could give rise to arguments that the regulator in question has failed to take into account relevant considerations, has taken into account irrelevant considerations, or has otherwise failed to act in a way that is procedurally fair.

Finally, it is not impossible to conceive of an illegality ground in the context of AI decision-making. Public authorities are expected to follow policies they have published,[2] and an AI system may not appreciate the statutory purpose of an applicable policy, or may over-rely on an algorithm, giving rise to an argument that the public body in question has fettered its own discretion or has not complied with the purpose and objective of a relevant policy.

Achieving the Opposite Effect

Pressure from the UK government on regulators to speed things up, while perhaps well-intentioned, may have the opposite effect. A quick regulator may fall short of being an effective one, and standards of decision-making processes are liable to suffer if targets on timings are set.

And with the UK government showing its clear intent for AI to be adopted and utilised by public bodies, it would be no surprise to see regulators fast-track the use of AI into their decision-making processes. However, using AI does not guarantee that things will be done more efficiently. AI technology is still nascent, and although there is a wider discussion to be had about its benefits and drawbacks for society, it is important to appreciate that, at this moment in time, the technology has inherent flaws.

Hastily turning to AI for important, complex, and consequential decisions affecting the public could see regulators become the subject of a far greater number of complaints and challenges due to the issues that can arise when utilising AI. The result could be a regulatory landscape that is more litigious than it is today. Ironically, the push for speed may result in more delays as the regulatory process is slowed down by an environment where a much higher number of decisions are subject to judicial review as a result of the use of AI – particularly in the absence of concrete legislative safeguards relating to the deployment of AI by the public sector. It is also likely that, while we are reliant on public law principles to police the use of AI by regulators and other public bodies, we will see an evolution of the case law and of administrative law procedure, especially as concerns the use of expert evidence and the burden of proof in judicial review.

Ultimately, a careful balance must be struck between streamlining regulation and safeguarding the fairness, legality, and public confidence in decision-making – especially when emerging technologies like AI are involved.


Footnotes

[1] R v Secretary of State for the Home Department, ex parte Doody [1993] 1 A.C. 531

[2] R (Milner) v South Central Strategic Health Authority [2011] EWHC 218 (Admin)

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Jenner & Block
