Notice & Comment

The Outsourcing of Algorithmic Governance, by David S. Rubenstein

The proliferation of artificial intelligence has raised urgent questions about how the government should regulate and utilize this transformative technology. Two recent White House documents speak directly to these questions. The first is a November 2020 memorandum from the Office of Management and Budget (OMB), which provides formal guidance to federal agencies about “regulatory and non-regulatory approaches” to AI applications in the private sector. The second is Executive Order 13960, signed by President Trump on December 3, 2020, which establishes policies and principles to promote the “trustworthy” use of AI by federal agencies.

These mandates are independently significant. The government’s regulation of AI is one thing; the government’s use of AI is another. But, in key respects, this is a false dichotomy. AI systems are notoriously biased, risky, and opaque, regardless of whether they are utilized by government or private actors. If the public loses trust in AI technologies, that anxiety will not be neatly cabined into public and private spheres. Nor should the polity draw sharp distinctions. After all, the AI technologies deployed in the private sector are, by and large, the same technologies being deployed for government functions (in the areas of law enforcement, adjudication, delivery of government services, national security, resource allocation, hiring, and more).

The connections, however, run much deeper. Despite pockets of excellence, the government’s pent-up demand for AI systems far exceeds its in-house capacity to design, develop, field, and monitor this powerful technology. Accordingly, many (if not most) of the government’s AI tools and services will be procured by contract from the technology industry. And, because AI development and deployment is virtually unregulated in the private market, the government will effectively be procuring unregulated technologies for high-stakes government applications. As recently put by the National Security Commission on Artificial Intelligence (NSCAI), the government is in “perpetual catch up mode,” with “limited control over how AI technologies are developed, shared, and used.”

The government’s technological debt is the culmination of historical reforms that are coming home to roost. Widespread bipartisan support to “shrink” and “reinvent” the federal government in the 1990s was propelled by aspirations to make government more business-like and efficient. These reforms yielded certain successes. But decades of federal hiring caps, cuts, and freezes have left the federal government with little choice but to use contract and grant employees to achieve its goals.

Over the same stretch of time, federal spending on research and development (R&D) for new technologies declined precipitously. The technology waves that ushered in the home computer, the internet, and the internet-of-things (phones, tablets, and gadgets) were mostly sourced with private capital, free and clear of the property rights that the government would otherwise enjoy had the technologies been publicly funded.

The government’s market dependencies are not inherently problematic. But in the context of AI, they are highly concerning and require immediate attention. Acquiring AI is not business as usual. When procured from private vendors, AI systems may be shrouded in trade secrecy, which can impede public transparency and accountability. One way that our system holds government actors accountable is through judicial review. The use of AI in government decisions, however, can stymie a court’s ability to probe the reasons for government action if the agency (or its vendors) cannot adequately explain why an AI tool made a particular prediction or classification that led to a government decision. Moreover, government watchdogs, journalists, and stakeholders may also be constrained in their ability to “look under the hood” of AI tools affecting the polity’s rights and interests. This not only shuts out stakeholder input; it can also breed public distrust in government uses of the technology.

Beyond these concerns lies another: AI systems are embedded with value-laden decisions about what is technically feasible, socially acceptable, economically viable, and legally permissible. Thus, through procurement channels, the government will be acquiring value-laden products from private actors whose financial motivations and legal sensitivities may not align with the government or the people it serves.

This puts President Trump’s recent Executive Order, tellingly titled “Promoting the Trustworthy Use of Artificial Intelligence in the Federal Government,” into proper context. Among other things, the Executive Order establishes a “common set of Principles … designed to foster public trust and confidence” in the use of AI. The first principle instructs agencies to “design, develop, acquire, and use AI in a manner that exhibits due respect for our Nation’s values and is consistent with the Constitution and all other applicable laws and policies, including those addressing privacy, civil rights, and civil liberties.”

As matters currently stand, however, the government is investing huge amounts of taxpayer dollars to procure unregulated AI systems that may be unusable, either because they are untrustworthy, or because they are unlawful. For example, if the government cannot explain how an AI system works, then it may run afoul of constitutional due process or administrative law principles. Even if an AI system clears those thresholds, it may still violate federal anti-discrimination laws, privacy laws, and domain-specific strictures across the regulatory spectrum. Litigation will no doubt surface these tensions; indeed, it already has.

A burgeoning scholarship has emerged to meet the challenges of algorithmic governance. Most of the public law scholarship to date has focused on the tensions between algorithmic governance and the constitutional rights of due process and equal protection. The Administrative Conference of the United States (ACUS) and a cadre of scholars have also begun the important work of squaring algorithmic governance with administrative law principles.

Federal procurement law, however, remains a dangerous blind spot in the reformist agenda. To be clear: there is growing awareness that acquisition channels and a procurement mindset can exacerbate the risks of algorithmic governance (for example, in Smart Cities and more generally). Still, the positive case for how procurement law can be retrofitted to mitigate those risks has yet to permeate the academic imagination. My hope is that this will change.

I argue in a forthcoming law review article that federal procurement law is uniquely situated and well suited to serve as a checkpoint and catalyst for ethical algorithmic governance. More than a marketplace, the AI acquisition gateway can and must be reimagined as a policymaking space.

Across all sectors, it is widely acknowledged that broad acceptance and adoption of AI systems rely on trust, which in turn depends on transparent, accountable, fair, and responsible uses of this technology. More than just lofty ideals, ethical AI is a type of currency that speaks loudly in the marketplace. Every major technology company has teams of highly skilled workers and mounds of investment capital dedicated to “ethical AI.” And the government, for its part, is investing huge sums of money into related R&D (although much more is needed).

Still, translating ethical AI principles into practice is a major challenge. With proper tooling, federal procurement law can be harnessed in service of that mission. Foremost, centering ethical AI throughout the procurement process will force agency officials and vendors to think more critically—and competitively—about the AI tools passing through the acquisition gateway. Moreover, agencies can be required to prepare pre-acquisition risk assessments specifically tailored for AI procurements. These assessments can be used and updated throughout the procurement process to help manage a portfolio of AI risks relating to transparency, accountability, data privacy, security, and algorithmic bias.

Federal lawmakers could also require agencies to explicitly account for these risk factors in market solicitations and contractual awards. Doing so can be directly and indirectly beneficial. Most directly, the answers by market participants will enable the agency to make side-by-side comparisons of the risks associated with a particular vendor relative to the field. Anticipating this, strategic and innovative vendors will compete for an ethical edge. In some instances, the agency might even find opportunities for collaboration—for example, between two or more startup enterprises—to mitigate the overall risk based on the vendors’ strengths and weaknesses. Less directly, yet as importantly, the government’s purchasing power and virtue signaling can spur market innovation and galvanize public trust in AI technologies.

While the future of AI is anything but certain, a stack of pending legislative bills, widespread “techlash,” and the proliferation of ethical AI frameworks converge around a simple truth: Awful AI does not sell—politically or commercially. That shared reality is a foundation for principled and pragmatic regulatory compromise through the procurement system. Certainly, procurement law will not solve the many challenges of algorithmic governance. Just as surely, the challenges of algorithmic governance cannot be solved without procurement law.

David S. Rubenstein is the James R. Ahrens Chair in Constitutional Law and Director, Robert Dole Center for Law and Government, Washburn University School of Law.
