Reflections on Automated Agencies, by Nina E. Olson
This post is part of Notice & Comment’s symposium on Joshua D. Blank and Leigh Osofsky’s Automated Agencies: The Transformation of Government Guidance. For other posts in the series, click here.
In Automated Agencies: The Transformation of Government Guidance, the authors make a strong case for a principled approach toward agencies’ use of automated guidance, one that includes transparency, continued oversight and analysis of output, and rigorous attention to the disparate impact the guidance might have on various populations. A central premise of the book is that automated guidance can exacerbate the gap between those who can afford representation and professional advice and those who cannot. Automated guidance, then, is a specific example of the risk that digitalization of agencies poses in general: wealthy individuals and large corporations receive highly personalized, individualized “concierge” service, while everyone else is left with depersonalized, generalized, and remote offerings. In the context of U.S. tax administration, this approach not only raises issues of fairness and equity but also undermines tax compliance, which relies on over 180 million individual and business taxpayers’ willingness to file tax returns without resort to coercion. The authors propose several approaches to the responsible use of automated guidance to mitigate this corrosive gap, including transparency, inclusion, and fairness.
As National Taxpayer Advocate at the Internal Revenue Service, I witnessed many instances in which the use of automated guidance harmed taxpayers. As the authors note, the lack of transparency about the assumptions underlying decision trees or models, assumptions that would be apparent in written instructions to staff such as the IRS Internal Revenue Manual, makes effective oversight extraordinarily difficult. Biased assumptions, often embedded in the operational data used to train models, become apparent only when the negative consequences of actions based on the model’s output have reached a level that is impossible to ignore. By then the harm to taxpayers has already occurred, and the agency is in a defensive posture.
The speed and agility with which policy changes can be executed is often cited as a major benefit of automation. These traits, however, can create pressure to bypass necessary and thoughtful review and discussion before changes are implemented. One example that will be etched in my brain forever is what I shall call the “attainment of age” debacle.[1] More than 75 statutory provisions in the Internal Revenue Code use this phrase. Under the common law, as articulated by the Court of Common Pleas in England in 1677, a person attains the next age on the day before the anniversary of their birth (the “common law rule”).[2] Most people, on the other hand, believe they attain the next age on their birthday (the “birthday rule”).
The interpretation of this phrase can make the difference between a taxpayer born on January 1 of any given year being eligible for a tax benefit and incurring greater tax liabilities. A child’s age is a fundamental eligibility requirement for tax benefits including the refundable Earned Income Tax Credit (EITC), the Child Tax Credit, and the Child and Dependent Care Credit. A person’s age also determines eligibility for the tax credit for the elderly. Historically, the IRS applied whichever rule benefits the taxpayer: the “birthday rule” with respect to the EITC and the dependency exemption, and the “common law rule” with respect to the tax credit for the elderly.
In 2002, the IRS decided that consistency was more important than common understanding and, without any prior public notice of this change to its longstanding position, directed IRS programmers to build the common law rule into its return processing system for purposes of the EITC, the Child Tax Credit, and several other provisions. The IRS also advised tax software companies to make corresponding changes to their packages for the upcoming 2003 filing season.
According to UNICEF, over 10,000 children are born in the United States each year on New Year’s Day.[3] Imagine parents’ surprise when they found their tax software refusing to let them claim their child as a dependent or for the EITC because the child had attained the age of 19 (or 24, if a full-time student) on December 31, despite being born on January 1. In fact, the complaints started rolling in to the Taxpayer Advocate Service (TAS) during the filing season, and before we could get to the bottom of the issue, over 40,000 tax returns had been negatively affected, requiring the agency to take corrective action.[4]
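To make the one-day difference concrete, here is a minimal sketch in Python of the two interpretations and their effect on the under-19 test described above. The function and the simplified eligibility check are my own illustration, not the IRS’s actual processing logic.

```python
from datetime import date, timedelta

def attainment_date(birth: date, age: int, common_law: bool) -> date:
    """Date on which a person attains `age`.

    Birthday rule: the anniversary of birth.
    Common law rule: the day before the anniversary of birth.
    (Leap-day births are ignored for simplicity.)
    """
    anniversary = birth.replace(year=birth.year + age)
    return anniversary - timedelta(days=1) if common_law else anniversary

# A child born January 1, 1985, tested for tax year 2003: a qualifying
# child generally must not have attained age 19 by year end.
birth, year_end = date(1985, 1, 1), date(2003, 12, 31)
for label, common_law in [("birthday rule", False), ("common law rule", True)]:
    attained = attainment_date(birth, 19, common_law)
    print(f"{label}: attains 19 on {attained}; qualifying child for 2003: {attained > year_end}")
# birthday rule: attains 19 on 2004-01-01; qualifying child for 2003: True
# common law rule: attains 19 on 2003-12-31; qualifying child for 2003: False
```

The shift of a single day crosses a tax-year boundary only for January 1 births, which is why that population alone bore the consequences.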
This example may seem like a rogue event, but I posit that events like this happen every day. Had that language been included in a notice of proposed rulemaking, the public would certainly have commented on its negative impact. Inclusion in internal agency guidance such as the Internal Revenue Manual would have required review by the IRS Chief Counsel and by subject matter experts in TAS and other IRS functions. Instead, the change was conveyed in a non-transparent manner and invisibly programmed. The good news is that the error was discovered very quickly, partly because it affected so many people and deviated from a decades-old interpretation on which all taxpayers relied. The bad news is that many such interpretations and assumptions embedded in automated programming have a significant impact on only a few people and thus never get the attention required to fix them.
There is another downstream consequence of automated guidance, namely the “dumbing down” of agency employees. Consider the example of the IRS “reasonable cause assistant,” or RCA. The U.S. Internal Revenue Code imposes civil penalties for many infractions, including late filing of returns and late payment of tax. Many of these penalty provisions include a “reasonable cause” exception that allows for penalty abatement. To its credit, a couple of decades ago the IRS created a decision-tree application based on the case law surrounding the application of reasonable cause. The RCA, as it became known, enabled thousands more employees to make penalty-abatement decisions in real time, even over the phone. It was time-saving and efficient, and it was intended to increase accuracy and consistency in penalty-abatement decisions. The RCA also included an override feature for occasions when an employee believed the application had arrived at the wrong answer, allowing employees to elevate cases of first impression or potentially erroneous results and obtain individualized advice. Theoretically, the RCA had a built-in feedback loop that would ensure it incorporated future learnings.
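The basic shape of such a tool, a fixed decision tree with an employee override, can be sketched in a few lines. Everything below (the fields, the criteria, the override path) is a hypothetical simplification for illustration, not the actual RCA logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PenaltyCase:
    circumstances_beyond_control: bool  # e.g., serious illness or disaster
    acted_promptly_once_able: bool
    compliant_history: bool             # clean filing/payment record in prior years

def tree_recommendation(case: PenaltyCase) -> str:
    """Walk a fixed decision tree and return an abatement recommendation."""
    if case.circumstances_beyond_control and case.acted_promptly_once_able:
        return "abate"
    if case.compliant_history:
        return "abate"
    return "sustain penalty"

def decide(case: PenaltyCase, override_reason: Optional[str] = None) -> str:
    """An employee who believes the tree is wrong can elevate the case for
    individualized review; but only an employee who knows the underlying
    law well enough to object will ever supply an override reason."""
    if override_reason is not None:
        return f"elevated for review ({override_reason})"
    return tree_recommendation(case)
```

The design choice matters: the override fires only when an employee affirmatively objects, so the safeguard is only as strong as the employee’s knowledge of the law, which is precisely where things went wrong.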
Unfortunately, as with so many cost-saving automated advances, the savings were not reinvested in continued training of employees on the legal bases for reasonable cause. When the IRS expanded use of the RCA beyond trained auditors to legions of phone assistors untrained in the underlying law, the knowledge base necessary for identifying override cases was absent. A 2010 IRS Usability Study found that in 55 percent of cases, users reached the wrong determination, even though the users believed they were accurate in 100 percent of the cases.[5]
This, to me, is the extraordinary risk inherent in automated guidance. Not only will it be targeted mostly to, and used by, populations who lack access to paid lobbyists or representatives who can point out problems with the guidance, but the pressure on all agencies to reduce costs will also produce an untrained agency workforce that is itself reliant on these automated tools and no longer has the knowledge to reason independently. Such expertise will exist only at the concierge level. Moreover, the lack of transparency in formulating this guidance, coupled with the speed and invisibility with which guidance can change, creates a risk of errors that harm populations at massive scale.
It doesn’t have to be this way. First, agencies need to make the case, clearly, that a trained and educated workforce is essential for the fair and just application of automated tools. Second, to combat the bifurcation of assistance, agencies must recognize the right to human intervention and design their automated systems to effectuate that right. Third, agencies should establish an oversight board that includes ombuds and advocates as well as subject matter experts and technologists, supported by reviewers from throughout the agency who periodically examine the underlying assumptions and output to identify inaccuracies and unintended consequences. Fourth, agencies should adopt a framework for analyzing whether automated guidance is likely to have a disparate impact on vulnerable populations, including low-income and other un- or under-represented groups. Fifth, agencies should make public the policy decisions incorporated into their automated guidance, subject to exceptions similar to those under the Freedom of Information Act.
Finally, agencies should put themselves in the shoes of the population they serve. I have written about the concept of the Taxpayer Anxiety Index, in which one walks through automated procedures (including telephone trees) and notes how one’s anxiety, or frustration, level increases.[6] This is more than just a thought experiment. It tells us when the need for human intervention, from the taxpayer’s perspective, outweighs the agency’s desire for “efficiency” and cost savings. It restores dignity to the individual. Indeed, machines can learn when these points are reached, if only we tell them to notice and make the hand-off; a minimal sketch of that idea follows below. This, to me, is real efficiency: using automation to resolve the easy, run-of-the-mill issues while also recognizing the questions or issues that require human interaction and proactively offering that intervention. In short, agencies should adopt a humanist approach to automation; doing so would help mitigate the risks that attend digitalization.
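What might that hand-off look like in practice? Here is a minimal sketch, assuming entirely hypothetical frustration signals and an arbitrary threshold, of automation that notices when to offer a human.

```python
# Hypothetical signals of rising frustration in an automated phone tree or
# web flow; the events, weights, and threshold are illustrative only.
FRUSTRATION_SIGNALS = {
    "repeated_menu": 1,             # caller loops back to the same menu
    "failed_authentication": 2,
    "repeated_question": 2,         # same question asked or rephrased again
    "explicit_request_for_agent": 5,
}

def should_offer_human(events: list[str], threshold: int = 5) -> bool:
    """Return True once accumulated frustration signals cross the threshold,
    at which point the system proactively offers a hand-off to a person."""
    score = sum(FRUSTRATION_SIGNALS.get(event, 0) for event in events)
    return score >= threshold

# Two trips through the same menu, a failed login, and a repeated question
# trigger the offer (1 + 1 + 2 + 2 = 6 >= 5):
print(should_offer_human(
    ["repeated_menu", "repeated_menu", "failed_authentication", "repeated_question"]))
# True
```

The point is not the particular weights but the posture: the system, rather than the exhausted taxpayer, carries the burden of recognizing when automation has stopped helping.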
Nina E. Olson is the Executive Director of the Center for Taxpayer Rights. From March 2001 to July 2019, she served as the National Taxpayer Advocate of the United States.
[1] This example is discussed at length in National Taxpayer Advocate, 2003 Annual Report to Congress, Legislative Recommendation: Attainment of Age Definition 308-311.
[2] Nichols v. Ramsel (1677), 2 Mod. 280, 86 Eng. Rep. 1072.
[3] https://www.unicefusa.org/press/new-years-babies-over-10000-children-will-be-born-united-states-new-years-day-unicef-usa.
[4] The IRS quickly issued a revenue ruling adopting the birthday rule for purposes of eight Internal Revenue Code provisions, including the EITC, CTC, dependency exemptions, and the child and dependent care credit. Rev. Rul. 2003-72.
[5] For an in-depth discussion of the development of the RCA and the Usability Study, see National Taxpayer Advocate, 2005 Annual Report to Congress, Most Serious Problem: Reasonable Cause Assistant 357-368, and National Taxpayer Advocate, 2010 Annual Report to Congress, Most Serious Problem: The IRS’s Over-Reliance on its Reasonable Cause Assistant Leads to Inaccurate Penalty Abatement Determinations 198-210.
[6] National Taxpayer Advocate, 2018 Annual Report to Congress, Preface i.

