
Artificial Intelligence in Government Services

Dan Ho and I have an op-ed in The Hill today criticizing the Biden administration’s proposed rules for using artificial intelligence in government services.

[W]here the rules could go astray is in their breadth. The rules prohibit the use of AI in “government services” unless agencies comply with an extensive set of procedures. Those procedures include public consultation with outside groups, the creation of a mechanism to appeal the AI’s decision, and a requirement to allow individuals to opt out of AI review.

Imposing those requirements across the board — to all government services — would be a huge blunder.

To see why, keep in mind that categorization is a big part of what government does. The Internal Revenue Service has to decide which tax returns have math errors and require follow-up. The Social Security Administration must distinguish claimants who are disabled from those who are not. Veterans Administration physicians have to figure out what medical tests to order for which patients. And so on.

All these tasks … require the government to categorize things that, at first blush, look pretty similar. And the government makes millions of categorization decisions every year. As one VA official says, “Right now, each time you breathe out, the VA just produced an expert medical opinion on a claim.”

Historically, humans have made those decisions. But humans are expensive, fallible and slow. Sometimes, AI can help do the job faster, more accurately, and at less taxpayer expense.

This is already happening. Veterans Administration hospitals use videos to quickly detect patient seizures. The Social Security Administration employs a tool to help its judges spot errors in draft decisions for disability benefits. And, since 1965, the Postal Service has relied on a crude form of AI to read ZIP codes to route letters and verify postage.

The new rules could put such modernization efforts in jeopardy.

As Dan and I see it, the Biden administration’s hypercautious approach to AI is dispiritingly common. Anxiety that the government might do bad things leads to the adoption of inflexible procedural rules that make it impossible for the government to do good things. I call it the procedure fetish: we’ve lost sight of the fact that red tape can be as bad for government as it is for business.

You can get more details in a letter that Dan and his colleagues at Stanford’s RegLab sent to OMB. Or you can read Dan’s recent testimony before the U.S. House of Representatives: “[P]rocess must be tailored to risk. For example, the memo’s proposal that agencies allow everyone to opt out of AI for human review does not always make sense, given the sheer variety of programs and uses of AI. … Denials of SNAP benefits, for instance, are inaccurate 44% of the time. The government cannot ‘human’ its way out of these problems.”

Fortunately, the rules aren’t yet set in stone. Here’s hoping the Biden administration rethinks its approach.
