Notice & Comment

The Machine in the Mirror, by Stephanie Bornstein

*This is the eighth post in a symposium on Orly Lobel’s The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, selected by The Economist as a best book of 2022. All posts from this symposium can be found here. Further reviews can be found at Science, The Economist, and Kirkus.*

There is something both wonderful and troubling about automated curation, a feature of many of today’s technological tools.  Spotify, Netflix, and Google Discover stand ready with recommendations for the next album, movie, or news article for you based on what you’ve already consumed.  Often the algorithms get things right; sometimes they over-value a new pop song, sappy rom-com, or celebrity break-up story on which you clicked.

Our machines and the algorithms that run on them are a reflection of ourselves.  As Orly Lobel explores in her compelling new book, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future, that’s both the reason for their promise and the source of their limitation.  Lobel paints an optimistic vision of our digital future, calling for us to embrace and improve the digital technology that pervades our work, home, and love lives.  And for good reason: as decades of social scientific evidence have documented, “[h]uman decision-making is inherently limited, flawed, and biased” (p. 5), perpetuating historical inequality and disadvantage.  Lobel contends that we should not let the perfect be the enemy of the good, or of progress; instead, we should focus on the “potential and ability” of digital technology to “do better than our current systems” (p. 5).  She acknowledges that, left unchecked, digital technology reflects our own biases back to us, often magnifying and distorting existing disadvantage around race, gender, and other status characteristics.  Yet Lobel makes a convincing case that, “despite its risks and flaws, digitization can and must become a powerful force for societal good” (p. 4).

When it comes to digital technology in workplace hiring and pay-setting, it may, ironically, be our efforts to correct machine errors that help us improve ourselves.  Despite efforts over the six decades since Congress enacted laws to outlaw sex, race, and other protected class discrimination in employment (Title VII of the Civil Rights Act of 1964) and sex discrimination in compensation (Equal Pay Act of 1963), our economy is still marked by deep inequality.  The overall gender pay gap has been stuck at around 20 percent for two decades; the racial pay gap is worse than it was in the late 1970s.  Both gaps are due, in significant part, to workplace segregation by gender and race in the types of industries in which workers work and the level of seniority they have within workplaces.  No doubt, we have made major strides since mid-century.  But workplace integration and advances in economic equality have slowed to a crawl, with miles left to go.

Enter technology and the chance to rein in and even counter the subjective decision-making that perpetuates existing privilege—if we can only harness the technology correctly.  First, we should use the right tool for the task.  As Lobel describes it, “we need to separate technological capability from its function,” and “to actively select uses of technology that empower and build a better society” (p. 7).  For example, when using algorithms in hiring, tools that mask candidates’ protected characteristics can sometimes help reduce hiring bias.  Lobel offers the example of technology companies like Pymetrics that use video games or simulations to test candidates for qualities the employer values.  By hiding candidates’ identity characteristics, such technologies claim to make employer evaluations of competence more objective, leading to more diverse hiring (pp. 61-64).  Yet other tools, like algorithms that screen applicant resumes to match models of successful employees, risk reproducing and magnifying bias, as in the example of Amazon’s abandoned hiring algorithm that learned, by training on data of mostly male employees, to code female identity markers negatively (pp. 56-61).

Second, we should build better tools.  Lobel explains the current movement toward “fairness through awareness,” and the “consensus…emerging among computer scientists that…to promote equality and fairness…it is better to direct an algorithm’s outputs than to restrict its inputs” (p. 30).  Instead of masking, this may require affirmative consideration of status characteristics like race or gender to interpret data more fairly (pp. 301-302).  Just as ignoring race and gender in the name of formal equality leaves us substantively unequal in real life, doing so in our data can result in the same harms.  Data is not neutral and “[a]lgorithms will replicate past wrong unless we explicitly direct them not to” (p. 7).  Luckily, as Lobel documents in vivid detail and in a broad variety of fields, data scientists and technologists are doing this, and it is starting to pay off.  For example, Lobel cites the job-recruiting platform Blendoor, which “trained its algorithm with performance metrics rather than historical resume data of those who had previously been hired”—an approach the company reported increased hiring of female and racial minority candidates (pp. 65-66).  Indeed, The Equality Machine provides a valuable contribution to the literature with its detailed accounting of real-life progress being made in the private sector, often missing from legal scholarship.

Lobel frames her vision of the equalizing power of technology as a counternarrative to the many scholars who have sounded the warning bell about runaway AI and algorithmic discrimination.  It is true that, in the area of work law, early scholarship focused more on risks than possibilities.  In foundational scholarship starting around 2015, legal scholars cautioned that using algorithms in hiring and promotion could replicate and exacerbate bias, creating disparate impacts by race and gender that were insulated from liability as essentially self-validating.  

In my own work, I went so far as to propose using a disparate treatment theory of discrimination when employers either recklessly failed to account for implicit bias (for example, by ignoring available algorithmic tools) or used the wrong types of algorithmic tools (which could exacerbate stereotyping).  Yet like Lobel, I believed the train had already left the station.  My goal was not to stop progress, but to encourage the right kind of progress.  Likewise, over the past eight years, leading scholars in the field (including Pauline Kim, Andrew Selbst, Ifeoma Ajunwa, Matthew Bodie, and many more) have evolved from sounding the warning cry about algorithmic discrimination to helping to build an action plan to correct it, offering legal parameters for improvement, like affirmatively considering status characteristics and developing law and procedures for better monitoring, accountability, and transparency.

Of course, whether you see the glass as half empty or as half full, it is still half a glass.  As upbeat as Lobel is about the prospect of a bright technological future, she also makes clear that it will take work to make digitization foster equality.  She describes the need for regular monitoring and auditing supported by law and policy that both encourages and requires accountability—a topic she weaves skillfully throughout the book and addresses more directly in the Epilogue (pp. 68-71, 294-298).

It is in doing this work that real progress can be made.  At least until we reach the singularity, algorithms are formulas that humans create, and artificial intelligence is still artificial.  The humans in charge of the machines have to make them behave better, which requires us to confront our own limitations and correct our own behaviors—whether it be through algorithmic affirmative action, counter-stereotypical programming of digital assistants, or any of the many promising efforts Lobel details.  Humans are the ones building the equality machine: we have to teach it how to achieve greater equality and, in doing so, hopefully learn some lessons ourselves.  

Stephanie Bornstein is the Irving Cypen Professor of Law at the University of Florida Levin College of Law.
