Artificial Intelligence and Administrative Law: The UK’s Search for a New Framework, by Joe Tomlinson & Brendan McGurk
This post is the eleventh contribution to Notice & Comment’s symposium on AI and the APA. For other posts in the series, click here.
The questions animating this symposium—how administrative law should adapt to the rise of artificial intelligence—are hardly confined to the United States. The United Kingdom, like many other jurisdictions, is grappling with the same foundational tension: can traditional administrative law frameworks meaningfully govern decisions made or assisted by AI? Or do we need new legislation tailored to the distinctive risks and opportunities for innovation that AI in administration presents? The UK’s approach is evolving quickly. In this post, we outline where things currently stand and how the possibility of a bespoke legislative framework for AI in administration—which only recently seemed a distant possibility—has now moved firmly into the realm of plausibility.
The UK government has been moving with striking speed to embed AI across public administration. Ambitions are high—better outcomes for lower costs at scale—but implementation has been uneven. Even so, judges are now openly using AI tools in their work, and large-scale administrative deployments are multiplying. More than one hundred examples are now available via a government-led transparency register, which includes the following concrete use cases:
- Value Added Tax (‘VAT’) is the UK’s sales/consumption tax on the purchase of goods and services. A tool is used to detect anomalous values within a trader’s VAT Return history, for instance to identify when unusually low amounts of taxable income are declared, so that the return can be flagged for further investigation (see here; a simplified sketch of this kind of anomaly check appears after this list);
- Use of chatbots across public administration to triage or prioritise incoming calls and redirect requests appropriately. For instance, such tools have been adopted by the Department for Work and Pensions (the central government body that administers welfare benefits) and by bodies such as the Driver and Vehicle Licensing Agency (see here and here); and
- The Department for Work and Pensions’ Universal Credit Advances Model, an AI-based tool that assesses fraud risk in the payment of certain benefits (see here).
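By way of illustration, the VAT use case above turns on flagging values that are out of line with a trader’s own return history. The methodology behind the actual tool is not public, so the following Python sketch is purely hypothetical: it assumes a simple robust-statistics check (median absolute deviation), and the function name, figures, and threshold are all invented for illustration.

```python
# Hypothetical illustration only: the real VAT tool's method is not public.
# This sketch flags unusually LOW declared taxable income in a trader's
# return history using a median-absolute-deviation (MAD) threshold.
from statistics import median

def flag_low_returns(declared: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of returns whose declared taxable income is
    anomalously low relative to the trader's own history."""
    med = median(declared)
    # MAD is a robust spread measure: a few extreme values barely move it.
    mad = median(abs(v - med) for v in declared) or 1.0
    return [
        i for i, v in enumerate(declared)
        if (med - v) / mad > threshold  # flag only unusually low values
    ]

# Invented example: quarterly declared taxable income for one trader.
history = [41_200, 39_800, 40_500, 42_100, 12_300, 40_900]
print(flag_low_returns(history))  # -> [4], i.e. the 12,300 quarter
```

Consistent with the use case described above, a flagged return would simply be queued for human investigation; nothing in such a check decides a trader’s liability.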
As a result of Brexit, the UK is outside the reach of the EU’s AI Act and initially embraced a “pro-innovation” strategy—essentially a wait-and-see approach that prioritises experimentation over ex ante regulation. On paper, traditional sources of administrative law still apply, including the common law principles of judicial review, statutory procurement rules, data protection law, equality legislation, and so on. Yet in practice, very few disputes that engage AI use in administration have reached a full judicial decision, and enforcement mechanisms seem to be struggling for meaningful traction (for a fuller analysis of this position and possible causes, see here). In the meantime, public bodies have issued a growing body of soft law guidance attempting to map existing legal norms onto the implementation of AI systems (for an example, see here). Despite the development of soft law, deep uncertainty persists about how—or whether—existing frameworks of administrative law can effectively govern AI use within public administration.
While the UK’s erstwhile light-touch, “pro-innovation” posture toward AI in public administration may have seemed defensible as a way to avoid stifling experimentation and deterring investment, it has become increasingly clear that the approach carries serious risks. The absence of robust, enforceable legal regulation of AI in public administration exposes individuals to a heightened risk of failures stemming from both individual and systemic administrative errors. It also risks producing a patchwork of divergent regulatory responses across sectors, fashioned by regulators with overlapping but limited jurisdictions and powers. That in turn invites divergent decisional practices among regulators and a body of sector-specific precedents arising from legal challenges within each.
At the same time, the significant legal uncertainty surrounding the use of AI in government is creating mounting institutional risk. Administrative bodies continue to deploy automated systems at speed and scale, even though it is difficult to predict with confidence how a court might respond when given the opportunity to apply key parts of established law to those systems. In this environment, a single adverse ruling could have sweeping implications for government operations that increasingly depend on these technologies. In one of the rare judicial reviews involving rule-based automated decision-making, the responsible department openly argued that an unfavourable judgment could disrupt a significant part of its operating model and give rise to major financial costs. The UK courts also face an unenviable task in deciding cases in such a context and, from this perspective, it is little surprise that Lord Sales, the incoming Deputy President of the UK Supreme Court, has for several years been pointing to the urgency of rethinking administrative law in this area (for his two most recent papers, see here and here). There is also a sense that political risk is rising: an increasing number of governments around the world have seen their credibility erode or even collapse following high-profile failures of digital and automated systems, with Robodebt in Australia and SyRI in the Netherlands being prominent examples.
In light of growing recognition of these accumulating risks, the UK appears to be moving away from its earlier “wait and see” posture toward a more deliberate strategy. The policy conversation has begun to centre on crafting a distinct UK framework for regulating the use of AI in administrative action. Leading this effort appears to be the Law Commission—the independent statutory body charged with reviewing and recommending reforms to the law of England and Wales. In its recently announced Fourteenth Programme of Law Reform, the Commission underscored the gravity of the moment:
Developing a coherent legal framework to facilitate good and lawful [automated decision-making] can reasonably be described as the most significant current challenge in public law… There are legal risks and potential harms to the public and public confidence in government when [automated decision-making] goes wrong.
The Commission has committed to exploring legislative reform options, including whether the UK should adopt an overarching statutory framework or pursue more targeted reforms within specific administrative domains. Although the Law Commission operates independently, its work is typically sponsored by government departments.
In just over a year, the UK has thus moved from proclaiming a hands-off, “pro-innovation” strategy that explicitly eschewed regulation of governmental AI use to taking the first steps towards a serious law reform initiative aimed at doing precisely that. That shift can, in part, be explained by a change of government during this period. However, the new government, like its predecessor, is enthusiastically accelerating AI use in administration, so the growing recognition of risk also explains the changing picture. If history is any guide, the Law Commission’s process will unfold over several years and involve extensive consultation. In the meantime, the UK’s institutions, courts and public bodies alike, will be left to grapple with the thorny question of how to apply the existing patchwork of legal frameworks to new technologies that challenge the assumptions upon which those frameworks were erected. The conversation between the US and UK systems of administrative law, and with other jurisdictions around the world, may prove as valuable now as at any moment since the modern administrative state first took shape.
Joe Tomlinson is Professor of Administrative Law at the Dickson Poon School of Law, King’s College London, where he directs the Administrative Fairness Lab. Brendan McGurk KC is a Barrister at Monckton Chambers, London. Together they co-author Artificial Intelligence and Public Law (Hart Bloomsbury, 2025)—the first treatise on the UK law applicable to government AI use.

