Notice & Comment

Public-Private AI Governance Partnerships, by Elena Chachko

This is the seventh post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.

Orly Lobel’s The Equality Machine invites us to shift the lens on artificial intelligence. Instead of focusing on the harms that AI may inflict and the biases that it replicates, Lobel reimagines AI as a force for a better, more equal society. Unlike the flaws and biases of human judgment, she argues, those of AI can be pinpointed and fixed. Nor does AI render humanity obsolete, as critics fear. It can actually “help us be more human” by compensating for our weaknesses and blind spots.

Lobel acknowledges, however, that realizing AI’s potential to promote equality rather than perpetuate inequality requires work. Only through constant learning, iteration, and building on AI’s comparative advantages over human decision-making will we be able to progressively build “equality machines.” Lobel’s project is therefore not merely reframing what she views as a myopic, pessimistic conversation. It offers a characteristically comprehensive road map for governing AI across different fields. The breadth of her discussion is staggering: it includes work law, activism and organizing, media, gender equality, human trafficking, health and medicine, intimate relationships, services, and education.

I focus in this short comment on one of the main governance pathways that Lobel identifies for promoting ethical and equitable AI. She highlights the importance of public-private partnerships that would create “regulatory standards that oversee the ethics of algorithms” (at 297). In her vision, regulators will incentivize, support, and supervise private actors’ self-assessment of their own automated decision systems. They will also invest in building hubs of AI auditing expertise. Long before The Equality Machine, Lobel was an early proponent of shifting from traditional command-and-control regulation to “new governance” regulatory methods. New governance tools allow flexibility and cooperation between government and regulated industries around fast-evolving, complex challenges. They are particularly useful for tech governance, where the rapid pace of change leaves rigid traditional regulation ineffective and constantly struggling to catch up.

Public-private partnerships are a promising way of thinking about AI regulation, and I would be curious to read more from Lobel about concrete elements of effective public-private models for governing AI across the areas she covers. Yet recent experience with public-private partnerships for tech governance urges caution in adopting this model of regulation across the board, especially when it comes to security and surveillance. 

In my own work, I have considered how public-private partnerships function in tech platform geopolitical and security governance. The basic story is now fairly familiar. From 2016 onward, a “techlash,” triggered by platforms’ failure to mitigate real-world harms that they helped facilitate, pushed major players like Google, Facebook, Microsoft, Twitter (pre-Musk), and others to scale up their security and geopolitical policy development and operations. They replicated traditional government structures and policy methods in doing so. They also created or joined mechanisms for bilateral and multilateral cooperation with governments.

Among other things, platforms now designate “dangerous” organizations and individuals, much as governments designate alleged terrorists. They have developed policies to weed out fake accounts and counter information operations backed by foreign governments. They participate in the Global Internet Forum to Counter Terrorism (GIFCT), an international body devoted to combating terrorist content online. They have worked with governments to protect election security in recent major elections around the world. For instance, a platform-government election security working group met regularly in the run-up to the 2020 U.S. elections to exchange analysis and best practices. And most recently, platforms took an active part in the international response to the Russian invasion of Ukraine.

I have called this the platforms’ “geopolitical turn.” AI plays a central role both in the geopolitical and security harms that platforms help facilitate and in platform efforts to address them. For example, Gonzalez v. Google, currently pending before the Supreme Court, addresses the role of platform recommendation algorithms in facilitating terrorism and the related liability questions under Section 230 of the Communications Decency Act. Much has been said and written about how AI is used in content moderation and in identifying unusual patterns of user behavior as part of tech companies’ trust and safety work.

My analysis of emerging public-private partnerships in platform security and geopolitical governance reveals a mixed record. These partnerships are arguably inevitable in a networked world in which private actors control a key theater where national security and geopolitical threats manifest. Tech companies are better than the government at identifying issues with their own products and services. They can detect threats in real time. They know their technology and response options. And they are generally nimbler than government agencies. Moreover, particularly in the United States, constitutional and other legal constraints limit governments’ ability to control what happens on global private networks and what private actors do. And there are other advantages for governments in working through private actors, such as avoiding direct confrontation with foreign nations. Tech companies, for their part, need access to government expertise and intelligence to support their own efforts. Their relatively small (and recently shrinking) trust and safety teams cannot replace the vast national security bureaucracies and decades of national security tradecraft accumulated in government agencies.

Nonetheless, these partnerships also exacerbate biases, errors, and inequalities. Many government-tech company exchanges around national security are by nature secret and unaccountable. We know very little about them. It is an environment ripe for abuse. Platform AI biases and errors in singling out and sanctioning users compound well-documented flaws in government national security practices like blacklisting and electronic surveillance, including process failures and the disproportionate targeting of certain groups. There is little public evidence that government-platform exchanges around security and geopolitical matters have so far tackled the performance and ethics of platform AI in this area in any meaningful way. Government and platform interests often align around security matters, and each can serve as a power multiplier for the other in ways that prioritize the prevention of perceived security threats over the equal and fair treatment of individual users.

The lesson that emerges from that analysis is that public-private partnerships are not a panacea for tech regulation. Their impact on the regulated industry is complex. At their worst, they bring powerful private and government actors together behind closed doors, where their frequently aligned interests can sideline equality and fairness and entrench biases and patterns that discriminate against and harm individuals. It is therefore easy to be pessimistic about the effectiveness of such partnerships in pushing for more ethical AI.

Yet Lobel urges us to focus on the positive while working to improve what needs fixing. Tech platforms are now important geopolitical actors, and the mutual dependence between tech companies and governments requires some level of cooperation. Institutionalized public-private partnerships in platform security and geopolitical governance are better than the ad hoc interactions around specific threats that would inevitably occur in their absence and would be even less accountable. They may facilitate some degree of mutual accountability and provide a vehicle for government actors to inject public law norms and values into the security work of tech companies over time.

Recognizing this, there is also much to improve. One major task is making government-tech industry exchanges around national security more transparent and accountable to outside stakeholders. Another monumental task is making the ethics of tech platforms’ algorithmic decision-making in security and geopolitical matters a far bigger part of both the public and the regulatory conversation than it currently is. These objectives may seem like pie in the sky. But Lobel shows us that more ethical, equitable AI is not beyond our reach.

Elena Chachko is the inaugural Rappaport Fellow at Harvard Law School.
