
Cycles and Loops: Human Actors in Lobel’s “The Equality Machine,” by Pallavi Bugga & W. Nicholson Price II

*This is the second post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.*

In the midst of abundant scholarly criticism that artificial intelligence (AI) is deeply and profoundly biased, Orly Lobel’s Equality Machine presents a fascinating case for the potential of AI systems to foster societal good and reduce inequality and inequity. Lobel skillfully deploys examples taken from a diversity of fields—from healthcare to law enforcement to employee recruitment and well beyond—to demonstrate how AI can leverage big data and extract underlying patterns to shed light on problems otherwise unseen or unrecognized. But AI can not only see systemic disparities; Lobel argues it can also actively help ameliorate them. Nevertheless, as Lobel notes, mitigating bias is not inherent to AI, nor is it straightforward. While some AI systems can be specifically leveraged to recognize and resolve biases, others may simply replicate pre-existing human biases.

The ways in which AI interacts with bias are complex and diverse, as The Equality Machine illustrates without explicitly taxonomizing. Throughout her book, Lobel touches upon three distinct intersections of AI and bias: 1) AI that itself is biased; 2) AI that is less biased than humans (perhaps even unbiased!) and can therefore mitigate human bias; and 3) AI that can identify human bias (or even other AI bias, in the case of AI de-biasing algorithms). The first is the basis of much (justifiable) critique of AI, and the book would be naïve not to mention it; the second and third, though, are where Lobel shines a usefully sharpening light against the current gloomy zeitgeist. It’s worth reminding ourselves that AI, despite its many flaws and potential biases, has an awful lot of potential to make things better, more equal, and more just.

The reader is left with many delightful threads to pull and ponder. We focus on one that largely lies below the surface: the patterns of bias when AI and humans interact over time. 

Whether humans are creating, supervising, or simply using AI systems, their interaction with the AI has the potential to modulate bias in significant ways. One of us has recently written about these human-AI dynamics over time: exclusion cycles in medical AI. A bit of background: the exclusion cycle theory (developed by Ana Bracic in the social exclusion context) suggests that dominant-group (or anti-minority) behaviors are intractably interlinked with minoritized-group responses to those behaviors in ways that are cyclical and self-reinforcing. This can happen at a purely human level, demonstrating the kind of bias Lobel hopes AI can help resolve. In medicine, for instance, implicit biases on the part of physicians can lead to discriminatory treatment of certain minoritized patients; in response, those patients may withdraw from care or self-advocate; physicians may then characterize them as noncompliant or distrustful, perpetuating and accentuating the physicians’ initial anti-minority beliefs. Equality-Machine-esque AI to the rescue? Alas, it’s more complicated.

AI interacting dynamically with human systems may amplify bias rather than fix it, and that problem does not depend on the AI being biased at the outset. To see this more clearly, consider the three AI-bias interactions Lobel illustrates within the framework of the exclusion cycle theory. When an AI system is biased (perhaps trained on non-representative data, or simply reflecting underlying biases in the medical system), bias will propagate regardless of whether the humans using the system are themselves biased. Biased AI systems will produce lower-quality results for minoritized patients, leading more of those patients to distrust the AI and withdraw from care, in turn leading to lower representation of minoritized patients in big data—and to the AI system potentially encoding those patient responses as it continues to learn and evolve, thereby perpetuating its own algorithmic bias. Unfortunately, because the results of a medical AI are conveyed to the patient via the physician, biased AI systems can perpetuate both AI-based exclusion cycles and purely human physician-patient cycles.
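To make that feedback loop concrete, here is a minimal toy simulation in Python. It is our illustration, not a model from the book or from Bracic’s work; the parameter values, the withdrawal rule, and the representation penalty are all assumptions chosen only to show the direction of the dynamic, not its magnitude.

```python
# Toy model of an AI-driven exclusion cycle (illustrative assumptions only).
# A model's error rate for a minoritized group grows as that group becomes less
# represented in training data; higher error drives withdrawal from care, which
# shrinks the group's data share in the next round, and so on.

def simulate_exclusion_cycle(rounds=5, minority_share=0.30,
                             base_error=0.05, representation_penalty=0.25,
                             withdrawal_sensitivity=1.5):
    """Each round: error for the minoritized group rises as its data share falls;
    a larger error gap pushes more minoritized patients to withdraw."""
    history = []
    for t in range(rounds):
        # Error grows as the group's share of training data shrinks.
        minority_error = base_error + representation_penalty * (1 - minority_share)
        majority_error = base_error
        # Withdrawal scales with the gap in quality of results, further
        # reducing the group's share of next round's training data.
        withdrawal = withdrawal_sensitivity * (minority_error - majority_error)
        minority_share = max(0.0, minority_share * (1 - withdrawal))
        history.append((t, round(minority_share, 3), round(minority_error, 3)))
    return history

for t, share, err in simulate_exclusion_cycle():
    print(f"round {t}: minority data share={share}, minority error rate={err}")
```

Even with mild starting values, the minoritized group’s data share falls and its error rate rises each round, which is the self-reinforcing pattern the exclusion cycle theory describes.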

But biased AI is an easy (if depressingly common) case for problematic dynamics. How about when the AI is less biased than humans? Unfortunately, both the physician-patient and the AI exclusion cycles may be reinforced even when the AI itself is unbiased relative to humans, because AI systems are trained on data created and compiled from biased human behavior, and bias is embedded in the practice of medicine today. Thus, while an algorithm itself may be designed not to perpetuate bias, or to be neutral, it is likely to mirror and potentially amplify pre-existing human bias as it continuously evolves to reflect new data and observations. This amplification can increase over time, creating a difficult-to-observe vicious cycle as an unbiased AI system is gradually tainted by the human bias it encounters and learns from.
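The drift of an initially unbiased model toward human bias can also be sketched in a few lines. Again, this is purely our illustrative assumption (a linear mixing rule with made-up parameters), not anything drawn from the book: each retraining round mixes the model’s own prior behavior with labels produced by biased human practice, and the model’s bias converges toward the human level even though it starts at zero.

```python
# Toy sketch of bias drift (illustrative assumptions only): an initially
# unbiased model is periodically retrained on new labels, a fraction of which
# come from biased human decisions; the model's bias drifts toward the human
# level even though it started at zero.

def simulate_bias_drift(rounds=10, human_bias=0.20, human_label_fraction=0.5,
                        learning_rate=0.6):
    """Each retraining round nudges the model toward the average bias of its
    new training labels, which blend model-generated and human-generated labels."""
    model_bias = 0.0  # the model starts out unbiased
    trajectory = [model_bias]
    for _ in range(rounds):
        # New labels mix the model's current behavior with biased human practice.
        label_bias = (human_label_fraction * human_bias
                      + (1 - human_label_fraction) * model_bias)
        # Retraining moves the model part of the way toward the label bias.
        model_bias += learning_rate * (label_bias - model_bias)
        trajectory.append(round(model_bias, 3))
    return trajectory

print(simulate_bias_drift())  # drifts from 0.0 toward the human bias level
```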

The picture is a bit rosier for AI systems that can identify bias in otherwise complex patterns (an ability Lobel rightly lauds), or that can de-bias other AI systems. To be sure, there’s a baseline challenge: given that all medical data is generated by human physicians and is thus subject to human bias, what is the standard for “unbiased care”? Can we know when a medical AI is unbiased absent unbiased care in practice? But there are hints that AI can help unravel just this issue—see, for instance, a delightful paper using AI to unearth biases in how human physicians evaluate knee pain. But the human dynamics issue arises again: if AI can identify biases but biased human physicians wield final decision-making power and may override AI recommendations, the de-biasing benefit might be slow to materialize.
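A back-of-the-envelope sketch shows why overrides dilute the benefit. The numbers and the simple blending rule below are our assumptions, not results from the post or the knee-pain study: if physicians revert to their own judgment some fraction of the time, the bias patients actually experience is a weighted blend of human and AI bias.

```python
# Illustrative sketch: bias in delivered care when some AI recommendations are
# overridden by (more biased) human physicians.

def effective_bias(human_bias, ai_bias, override_rate):
    """Overridden decisions carry human bias; accepted ones carry AI bias."""
    return override_rate * human_bias + (1 - override_rate) * ai_bias

for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
    blended = effective_bias(human_bias=0.20, ai_bias=0.02, override_rate=rate)
    print(f"override rate {rate:.2f} -> effective bias {blended:.3f}")
```

Under these made-up numbers, even a well de-biased AI delivers most of its benefit only when override rates are low, which is the dynamic the paragraph above describes.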

Stepping back from the specifics of medical exclusion, human-AI interactions, static or dynamic, are hard to model and to regulate, whether the concern is bias, performance, or frankly anything else. Lobel suggests that a hybrid model—one that takes the best of humans and machines—may be preferable to purely human or purely algorithmic decision-making. The best of both worlds is a wonderful goal, but often an illusory one. What if humans in the loop of algorithmic systems impede algorithmic efficiency, undermine consistency, or generate more bias than would otherwise have been present? Alternatively, what if automation bias causes the human in the loop to defer to an AI when human input could have been beneficial? What if humans and machines simply interact in unexpected ways? These are tricky challenges that will themselves need to be tackled iteratively.

But, as Lobel notes, it is progress—not perfection—that is the goal for AI. Though much is still unknown about the roles humans play in modulating bias in AI, further investigations may uncover beneficial interactions between humans and algorithmic systems. Perhaps, as AI becomes increasingly debiased, we can design and shape human-AI interactions that create virtuous cycles, wherein AI debiasing not only compensates for human bias but can even decrease it over time. These virtuous cycles are unlikely to occur by accident; they’ll require careful thought and design. Along the way, AI may be able first to point out bias problems, and perhaps to augment or substitute for more-biased humans—with a watchful eye to avoid self-reinforcing exclusion cycles. Ideally, smart policy choices can help both humans and machines become agents of equality, moving us closer to a fairer and more inclusive future.

Pallavi Bugga is a JD student and Nicholson Price is a Professor of Law, both at the University of Michigan. 
