Automated Law Liability
Susan C. Morse*
INTRODUCTION
Centralized systems make more and more legal decisions. Often this
happens because machines, like computers, apply law. For instance,
computer programs prepare tax returns, certify compliance with
environmental regulation, keep wage and hour records, and take down
material allegedly subject to copyright protection. Should automated law
systems be directly liable for the errors they make?
The burden of errors made by automated law systems often falls on third
parties, not on the user or the maker of the automated law system. For
instance, if a taxpayer understates taxable income because of an error made
by tax preparation software, and the error goes undetected (which is likely),
then the group that bears the burden of that error is the general taxpaying
public. If a polluter underreports emissions of a harmful atmospheric
chemical, and the error goes undetected (which is likely), then the group that
bears the burden of the error is the general air-breathing public. Indeed there
is a clear incentive for an automated law system to commit errors that benefit
its users at the expense of third parties, so long as those errors will probably
not be detected.
*
8-Nov-17] AUTOMATED LAW LIABILITY 2
mistake? And when is an error caused by an automated law system, so that the error is “its mistake”?
Part III explains how the market for automated law products might
change as a result of the imposition of automated law liability. One likely
result is differentiation between more and less aggressive products. Another
is a framework of bonding and/or reinsurance, especially since appropriate
regulations might require proof of creditworthiness before accepting filings
prepared by an automated system. A third possibility is that the proposed
burden of liability might be so heavy that it would discourage the private
development of automated law products, and leave it to the less
technologically adept government to develop automated law.
Part IV considers the fact that automated law systems can be developed
by government, as well as by private firms. In either case, correcting system
errors is important to the quality of the resulting law. The direct liability
regime outlined in Part II and Part III uses government as the external check
on automated law systems developed by private firms. A government
automated law system presents the converse challenge: how can private
parties be empowered to identify and remedy errors made by the government
system?
A. Underenforcement
B. Negative Externalities
In regulation and compliance, third parties – not the user, and not the
system – often bear the burden of automated law system errors.1 Unless
1
Cf. Oren Bracha & Frank Pasquale, Federal Search Commission? Access, Fairness,
and Accountability in the Law of Search, 93 Cornell L. Rev. 1149, 1185-86 (2008) (noting
the problem of possible negative externalities resulting from the filtering and organizing of
search results by algorithms).
C. Reporting Positions
2
An underlying assumption is that the regulatory policy is in some sense wise or correct,
i.e., that error-free compliance would properly measure time worked, impose tax liability,
report environmental pollutants, and so forth.
3
Linda Sugin
• There are good things about tax return preparation software etc.,
including the saving of taxpayer time that it supports.
• Some but not all programs give aggressive advice.
• Regardless of how an error arises, the software under current law
is not necessarily involved in the resolution of the error. Say tax
software prepares a tax return with a legal error. The taxpayer
files the return. The government will rarely discover the legal
error, because its audit coverage is so low. If it does discover the
error, it will hold the taxpayer responsible.4 The taxpayer will be
required to pay back taxes, interest, and, only sometimes,
penalties. The magnitude of penalties (setting aside criminal
penalties) falls far short of the multiplied damages amount that
would be necessary to offset the low likelihood of detection.
• There is an incentive for the program to give aggressive advice –
perhaps then it could charge more. [Evidence that programs
compete on how much refund they can provide?]
• But this is not different from the incentive of other tax preparers
to give aggressive advice.
• The difference is the ease of enforcement that a centralized system of liabilities allows. This builds on some discrete existing strategies, e.g., the tax shelter promoter regulations, which target a centralized source of information.
• The market provides some evidence that taxpayers will pay for
insurance against risk that positions are incorrect.5
instance, an automated law system might produce the legal determination that
a party is compliant with a regulatory requirement.7 Automated law systems
produce wage and hour records,8 tax returns,9 and environmental reports.10 They
also produce responses to copyright-based takedown requests.11 These
compliance cases are the focus of the analysis. They are a subset of
automated law systems. Automated law also includes private law examples12
(which are beyond this paper’s scope) and government-generated automated
law, which is addressed in Part IV.13
There are several reasons for error. One is that humans design and build
automated law, and people make mistakes. Another reason is that automated
14
E.g. Maayan Perel & Niva Elkin-Koren, Accountability in Algorithmic Copyright
Enforcement, 19 Stan. Tech. L. Rev. 472, 477, 488-91 (2016) (explaining that “platforms[]
such as Google, Facebook, and Twitter … appl[y] various algorithms to perform qualitative
determinations, including the discretion-based assessments of copyright infringement and
fair use” in order to respond to robot-generated takedown requests by copyright owners and
suggesting that this results in over-enforcement of copyright rights).
15
See Benjamin Alarie, Anthony Niblett and Albert Yoon, Using Machine Learning
to Predict Outcomes in Tax Law (October 16, 2016), available at
https://ssrn.com/abstract=2855977 or http://dx.doi.org/10.2139/ssrn.2855977 (describing
AI technique applied to database consisting of the text of hundreds of cases to give answer
re: whether worker is an employee or an independent contractor for tax purposes).
16
The use of distributed ledger or blockchain technology, which also supports the
bitcoin currency, has been proposed for use by different computers in several legal capacities.
For instance, computers in different jurisdictions might agree on the status of an
import/export transaction. See Richard T. Ainsworth and Musaad Alwohaibi, Blockchain,
Bitcoin, and VAT in the GCC: The Missing Trader Example (2017 working paper)
(describing a blockchain-based information confirmation system proposed for a new VAT
system in Middle East Gulf Cooperation Council trading bloc). Blockchain technology
might confirm and effect international payments. See Marcel T. Rosner & Andrew Kang,
Note, Understanding and Regulating Twenty-First Century Payment Systems: The Ripple
Case Study, 114 Mich. L. Rev. 649, 651 (2016) (suggesting that the Federal Reserve would
have an interest in this regulatory solution). A proposed system based on blockchain has
been built to reduce the cost of administering so-called know-your-customer regulations
relevant to anti-money laundering and anti-tax evasion laws. See Jose Parra-Moyano & Omri
Ross, KYC Optimization Using Distributed Ledger Technology (2017 working paper, available on SSRN).
See generally Carla L. Reyes, Conceptualizing Cryptolaw, 96 Neb. L. Rev. __ (2017).
17
See Mock & Shurtz at 463 (describing TurboTax errors in 1994 and 1996). See also
Choe v. Commissioner, T.C. Summ. Op. 2008-90, 2008 WL 2852249, at *1 (July 24,
2008); Rev. Rul. 85-187 (each involving erroneous software depreciation calculations).
18
Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1256
(2008) (describing a state government automated law system that incorrectly denied benefits
to eligible welfare recipients).
19
Kenneth A. Bamberger, Technologies of Compliance: Risk and Regulation in a
Digital Age, 88 Tex. L. Rev. 669, __ (2010) (describing private automated law systems that
failed to recognize risks to bank capital in the period leading into the global financial crisis).
The tension between the architecture of computer systems and the goals
of law or democracy has been explored before. It is at the core of Larry
Lessig’s work on cyberspace.25 More specific shortcomings of particular
centralized, automated systems have also been explored.26 Often the
20
See Anthony J. Casey & Anthony Niblett, The Death of Rules and Standards, [2015]
draft at 3 (giving driving example to illustrate how technology might generate
“microdirectives”).
21
See, e.g., O’Connor v. Uber Techs., Inc., 82 F. Supp. 3d 1133 (N.D. Cal. 2015)
(allowing trial to proceed on classification issue); Cotter v. Lyft, Inc., 60 F. Supp. 3d 1067
(N.D. Cal. 2015) (same). See generally Shuyi Oei & Diane Ring, Can Sharing Be Taxed?
93 Wash. U. L. Rev. 989 (2016) (considering regulatory issues presented by the sharing
economy).
22
Tim Wu, When Code Isn’t Law, 89 Va. L. Rev. 679, 707-08 (2003). Wu describes a
“code designer act[ing] like a tax lawyer … look[ing] for loopholes or ambiguities in the
operation of the law.” Id. at 708.
23
See Jay A. Soled & Kathleen DeLaney Thomas, Regulating Tax Return Preparation,
58 B.C. L. Rev. 151, 200-01 (2017) (recommending prohibition of the “prepayment-position
status bar”).
24
See Marcos Pertierra, Sarah Lawsky, Erik Hemberg and Una-May O’Reilly, Towards
Formalizing Statute Law as Default Logic through Automatic Semantic Parsing (2017
working paper).
25
See generally Lawrence Lessig, Code and Other Laws of Cyberspace (1999) (arguing
that democratic mechanisms should oversee and edit the “architecture” of cyberspace).
26
See, e.g., Kenneth A. Bamberger, Technologies of Compliance: Risk and Regulation
in a Digital Age, 88 Tex. L. Rev. 669, 729-30 (2010) (recommending “dynamic model of
regulation” to improve private automated law systems); Danielle Keats Citron,
Technological Due Process, 85 Wash. U. L. Rev. 1249, 1256 (2008) (considering “a
reconceived Mathews test [that] might permit hearings on flaws in [government] software”
and recommending that agencies test and allow public comment on automated law
software).
Recall that the problem is that errors caused by an automated law system
tend to fall neither on the user nor on the system, but rather on
the public that the legal scheme is supposed to protect. This is the
same problem raised by non-automated advisors who offer aggressive
advice. But it is more susceptible of a solution, because of the centralized
feature of an automated law system.29 The goal of making the system directly
liable for errors is to solve an underdetection problem and shift the burden of
errors away from third parties and onto systems and users, since systems and
users presumably both benefit from errors (e.g., sharing the benefit of a lower
tax bill) and are well-positioned to avoid them.
27
See, e.g., Rory Van Loo, Rise of the Digital Regulator, 66 Duke L. J. 1267, 1327-28
(recommending oversight by an interdisciplinary “technology meta-agency”).
28
See, e.g., Henry Smith, Fusing the Equitable Function in Private Law, in Private Law
in the 21st Century (Kit Barker, Karen Fairweather, and Ross Grantham eds., forthcoming),
Harvard Public Law Working Paper No. 16-27 (arguing that errors are an inevitable feature
of law and that equity can be understood as a mechanism to correct them).
29
Others have recognized the legal design opportunity presented by centralized machine
gatekeepers. Cf. Susan Klein & Crystal Flinn, Social Media Compliance Programs and the
War Against Terrorism, 8 Harv. Nat’l Sec. L. J. 53, 57 (2017) (recommending “criminalizing
the failure of social media programs to institute policies that discover [and report] terrorism-
related posts”); Yesha Yadav, The Failure of Liability in Modern Markets, 102 Va. L. Rev.
1031, 1039-40 (2016) (considering strict liability and other regimes for harms generated by
high-frequency algorithmic trading).
30
The automated law system, in other words, would have the right of subrogation. As
the party solely liable for legal error, it could step into the shoes of the taxpayer to litigate
the question of legal error in the taxpayer’s case. One issue this raises is a possible conflict
of interest between the taxpayer and the system with respect to whether liability proceeded
from an error of fact or an error of law. As to this concern, see … Another issue is whether
the system may force the taxpayer to participate in a lawsuit. As to this concern, one solution
is that the taxpayer might be able to block the
Another issue is that the taxpayer might not be satisfied with the tax
software’s resolution of an issue with the government. For instance, what if
an issue is in a process of transformative legal change? What if a tax software
firm prepares a return that allows a registered domestic partnership in a
community property state to income split,31 and the IRS refuses to accept the
position, and the tax software firm does not contest it? Under the proposal
here, the RDP taxpayers cannot themselves contest the decision for that tax
year after their tax software firm settles it. But they could decline to use this
firm in other years, use another firm or self-prepare, and insist again on filing
under an income-splitting position.
It is also possible that the government would not be satisfied with the
resolution of an issue. Can it have a second chance to litigate an issue or
pursue a similar issue through the IRS controversy process? Given the
desirability of having several courts consider a problem, it would be wise to
leave the door open. One way to do this is to hold the automated law firm
liable not for all similar errors estimated over all returns ever filed, but rather
for returns filed in the same accounting period. Another controversy might
develop for another year.
31
Poe v Seaborn, Pat Cain’s blog
Is it acceptable for Taxpayer A’s case to set the system’s liability for
errors in the returns of all Taxpayers B? Does this deprive Taxpayer B of an
important opportunity to argue that her tax return was correct as submitted?
One response is that Taxpayer B has entered into a contract with the
automated and/or centralized law provider. Under the contract, Taxpayer B
allows the system to adjudicate legal errors using other taxpayers’ facts. For
instance, Taxpayer B has implicitly agreed that the cost of the software may
increase because of a determination in Taxpayer A’s case.
Under the proposal here, if a tax preparation system had prepared the Pau
return, and had allowed deductions on a maximum principal amount of $1.1
million, the system would be liable for the total estimated cost of those errors,
not only in the taxpayer’s case, but also in all cases. As a result, the cost of
the software might increase to cover the penalties, and also future versions of
the software would presumably feature the lower purchase money mortgage
principal limit.
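To make the aggregation concrete, a back-of-the-envelope sketch follows. Every figure in it – the interest attributable to the disputed principal, the marginal rate, and the count of similarly situated users – is hypothetical and chosen only for illustration:

```python
def total_system_liability(excess_interest_per_return, marginal_rate,
                           returns_affected):
    """Under the proposal sketched in the text, the audited case fixes the
    per-return error, and the system's liability extends across every
    similarly prepared return, not just the return that was audited."""
    tax_understated_per_return = excess_interest_per_return * marginal_rate
    return tax_understated_per_return * returns_affected

# Hypothetical figures: $5,000 of interest deducted on the disputed
# principal per return, a 25% marginal rate, and 10,000 affected returns.
liability = total_system_liability(5000, 0.25, 10_000)
print(liability)  # 12500000.0, i.e., $12.5 million across the system
```

The pricing response described in the text follows directly: the software's price would need to rise by roughly this expected liability spread over its user base, and future versions would presumably adopt the lower principal limit.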
Faced with this response by the tax software firm, a taxpayer with a $1.1
million purchase money mortgage has several choices. She can continue to
use the software and accept the mortgage interest deduction haircut. But she
need not do so. That is, she need not permanently sacrifice the right to
32
Lawsky [others]
33
Pau
directly defend her legal claim. She can switch to another method of tax
preparation that takes the opposite view on the legal question. Or, she can
self-prepare.34 In this example, the taxpayer who deducted interest on a $1.1
million purchase money mortgage would eventually have prevailed, as the
government changed its view several years after Pau.35
The full cost of the negative externality will not be assigned to the maker
of an automated law system unless the approach solves the problem of
underdetection. The appropriate tool is a damages multiplier. Penalties
should increase according to a damages multiplier designed to account for
the error costs incurred across the system, not just for the user whose specific
case is discovered.
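The multiplier arithmetic can be sketched as follows. The audit rate and per-return error cost are hypothetical and serve only to illustrate the gross-up:

```python
def system_penalty(error_cost_per_return, detected_returns,
                   audit_rate_denominator):
    """Becker-style gross-up: if only one return in every
    `audit_rate_denominator` is examined, scale the penalty on the detected
    returns by that factor, so that the system's expected liability
    approximates the error cost incurred across all returns filed."""
    return error_cost_per_return * detected_returns * audit_rate_denominator

# Hypothetical: a $1,000 understatement per return, 10 returns caught in
# audit, and a 1-in-100 audit rate. The grossed-up penalty stands in for
# the roughly 1,000 returns carrying the same error that go undetected.
penalty = system_penalty(1000, 10, 100)
print(penalty)  # 1000000
```

Because the penalty is assessed on the centralized system rather than on any individual filer, the gross-up tracks system-wide harm without imposing a disproportionate sanction on a single taxpayer.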
There are a number of issues with damages multipliers. But these issues
34
The possibility of a user overriding the system could also be added here. This raises
the question of whether software would attempt to shift liability to users by providing them
with choices that required them to reach legal decisions. If the software in fact requires
users to make such decisions, the system might avoid liability. But it would also sacrifice a
central selling point, which is removing the burden of understanding the law from the
shoulders of its users. Efforts on the part of an automated law system to have the best of
both worlds, for instance through fine print that purports to leave the user with the legal
decision but in fact does not engage the user in the decision, should be treated as legal
decisions made by the system. There is a parallel with some courts’ refusal to enforce
arbitration provisions buried too deeply in the fine print of an online contract. To
determine whether the software or the user makes a relevant legal decision, a court might
ask “whether the circumstances support the assumption” that the purchaser made an
independent legal decision rather than accepting the system’s default answer. Cf. Sgouros
v. TransUnion Corp., 817 F.3d 1029, 1034-35 (7th Cir. 2016) (asking in the context of
online contract terms “whether the circumstances support the assumption that the purchaser
receives reasonable notice of [the terms and conditions of the agreement]”).
35
See Rev. Rul. 2010-25, 2010-44 I.R.B. 571.
36
See Gary Becker, Crime and Punishment: An Economic Approach, 76 J. Pol. Econ.
169 (1968); see, e.g., Michael G. Allingham & Agnar Sandmo, Income Tax Evasion: A
Theoretical Analysis, 1 J. Pub. Econ. 323 (1972).
are less problematic for automated law systems. One challenge is that
political and rule of law proportionality constraints limit the ability to vastly
increase penalties imposed on a single person based on the idea that her
transgression was difficult to detect.37 A second consideration is that a fixed
damages multiplier across different offenses fails to account for the variation
in probability of detection and in particular for the likelihood that more
serious offenses are more likely to be detected.38 A third issue is that factors
other than the cost of compliance influence the magnitude of penalties. These
include aggressiveness, culpability and intent.39 They also include whether
the defendant has deep enough pockets to pay the larger penalty.
First, the imposition of the penalty on the centralized system, not the
individual violation, reframes the issue of ensuring that the punishment fits
the crime. The idea is that the centralized system itself has the responsibility
to correctly state the law or pay appropriate damages. The individual user’s
penalty is only the starting point for measuring the system’s total error.
37
See, e.g., Michael J. Graetz & Louis L. Wilde, The Economics of Tax Compliance:
Fact and Fantasy, 38 Nat’l Tax J. 355, 358 (1985) (“That an economic model analyzing the
expected utility calculation of a would-be tax evader recommends large increases in the
applicable sanction in light of the very low probability of its application quickly becomes
irrelevant as a policy matter. In this country, at least, legal, moral and political constraints
make this necessarily so. Coherence in our criminal law generally demands that ‘punishment
fit the crime’….”).
38
See Richard Craswell, Deterrence and Damages: The Multiplier Principle and its
Alternatives, 97 Mich. L. Rev. 2185, 2192 (1999).
39
See, e.g., Alex Raskolnikov, Six Degrees of Graduation: Law and Economics of
Variable Sanctions, 43 Fla. State L. Rev. 1015 (2016).
multiplier.
Third, the penalty itself (aside from the multiplier) imposed on automated
law systems could be set without reference to aggressiveness or culpability.42
The automated law liability idea does not use damages as a message
that the system wronged or hurt someone. It is not meant to act as a corrective
justice tool. It is more like a “public mechanism of accident regulation.”43
Since the goal of the liability regime is to force automated law systems to
internalize the costs of legal error, it should be sufficient to set the penalties
equal to the cost of legal error without, for instance, an upward adjustment
for culpability. Admittedly, this is easier to calculate for some automated law
systems than for others. The cost of underpaid taxes equals the tax
shortfall.44 In contrast, the cost of environmental noncompliance may be
more difficult to calculate.
The idea of a damages multiplier that a system can rebut with adequate
evidence of errors on other filings raises another problem of confidentiality
40
One example of a torts case in which a damages multiplier may have been customized
is a Seventh Circuit decision upholding an award of punitive damages against a
defendant who operated a bedbug-infested 191-room hotel. Two hotel guests sued, and the
total damages award was $10,000 in compensatory damages plus $372,000 in punitive
damages -- $2000 for every room of the hotel. Mathias v. Accor Economy Lodging, Inc.,
347 F.3d 672, 678 (7th Cir. 2003).
41
The Golsen rule provides that the Tax Court follows the precedent of the court of
appeals to which the taxpayer’s case would be appealable. See Golsen v. Commissioner, 54
T.C. 742 (1970). The proper damages multiplier
in a circuit split situation might be designed to calculate the total cost of the legal error for
all tax returns filed for residents in the circuit that gave the pro-government answer. The
automated law system could bear the burden of supplying the information necessary to
determine its users’ residence.
42
violations corrective justice
43
Alex Stein, The Domain of Torts, 117 Colum. L. Rev. 535, 594 (2017). A similar idea
is of “licensing-based liability,” distinct from “liability imposed on the basis of wrongdoing.”
See John C.P. Goldberg & Benjamin C. Zipursky, The Strict Liability in Fault and the Fault
in Strict Liability, 85 Fordham L. Rev. 743, 745 (2016). The proposal of automated law
liability here stretches beyond the domains of inherently dangerous activities and the like in
which common law tort imposes licensing-based liability. See id. at 784.
44
Although the appropriate discount rate might be controversial. [Check: Remedies.]
and privacy. This issue relates to the confidentiality and privacy of other
taxpayers – not those who have been directly audited, but those whose information
is relevant to the calculation of total damages. At least initially, the solution
proposed here allocates that problem to the automated law system. It is well-
positioned to determine the extent to which it should require its users to
participate in a test case damages calculation (in which case the users would
presumably have to agree to this involvement in a contract with the company)
and the extent to which it might, for instance, attempt to prove lower damages
using anonymized data or statistical techniques.
Both errors of law and errors of fact may occur in timekeeping software
systems. For instance, the software may prompt employers to enter
scheduled break and/or meal times for employees, and then automatically
deduct that time from paid time. This connects with a legal error if some state
laws do not allow break and/or meal times to be deducted from paid time. It
connects with a factual error if an employer enters the wrong information.
An employer’s incorrect data entry usually would seem to be the employer’s
fault, not the system’s fault. But even in this case one can find mixed
questions of fact and law. What if the system makes it very hard to change
entered time if it turns out that an employee works through a break? It is
conceivable that this could be cast as an error of law, perhaps as a legal error
45
The descriptions of errors and features of electronic timekeeping systems are based
on a qualitative empirical examination of thirteen such systems. See Elizabeth Tippett,
Charlotte S. Alexander and Zev J. Eigen, When Timekeeping Software Undermines
Compliance, 19 Yale J. L. & Tech. 1 (2017).
46
29 U.S.C. § 211(c).
because the design of the system so strongly suggests that scheduled time
worked, not actual time worked, is the relevant input.
When errors are errors of law, the mistakes may or may not be clear. Let
us assume that state law unambiguously states that break and/or meal times
count toward paid time. The software’s mistake on this front would be a clear
legal error. But timekeeping software also interacts with grey areas of law.
Consider the software’s interpretation of a time rounding rule. FLSA
regulations accept the practice of rounding “starting and stopping time … to
the nearest quarter of an hour” so long as it does not cause “a failure to
compensate the employees properly for all the time they have actually
worked.”47 Timekeeping software apparently implements this guidance with
a default setting that rounds a punch-in or punch-out time to the nearest
quarter hour, using a seven-minute breakpoint.48 But if employer rules effectively prevent
tardiness, so that employees are sometimes early, but never late, then the
software’s rounding default may systematically reduce the time recorded for
an employee. In this case, the software’s default rounding rule encourages
an employer to take an aggressive, but not clearly illegal, filing position.
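The one-sided bias described above can be illustrated with a short sketch. The rounding convention (nearest quarter hour, with a seven-minute breakpoint) follows the regulation quoted above and the Tippett, Alexander and Eigen description; the punch times themselves are hypothetical:

```python
def round_to_quarter_hour(minutes_past_hour):
    """Round a punch time, expressed as minutes past the hour, to the
    nearest quarter hour: offsets of 0-7 minutes round down, 8-14 up."""
    quarter, offset = divmod(minutes_past_hour, 15)
    if offset >= 8:
        quarter += 1
    return quarter * 15

# Employees who must never be late punch in a few minutes early. Every
# punch below rounds forward to the hour (minute 60), so the early
# minutes vanish, and no offsetting late punches exist.
early_punch_ins = [53, 55, 57, 59]   # 8:53, 8:55, 8:57, 8:59
recorded = [round_to_quarter_hour(m) for m in early_punch_ins]
print(recorded)   # [60, 60, 60, 60]
lost = sum(60 - m for m in early_punch_ins)
print(lost)       # 16 unpaid minutes, in this hypothetical
```

With a symmetric spread of early and late punches, the rounding would wash out; it is the employer rule forbidding tardiness that makes the default systematically reduce recorded time.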
The errors most obviously attributable to the system are clear errors of
law. If the system gets state law wrong, that is on the system.
Somewhat more difficult are errors of law that are not clear. In the
parlance of the tax law, these might be positions as to which there is a
substantial authority “reporting position,” so that a filer would not face
“substantial understatement” penalties as a result of the filing. The position
may be an incorrect statement of the law, though this is not obvious before it
is tested in an audit and perhaps a litigation process.
The liability of systems for legal errors should be strict. That is, liability
should not be limited to a negligent clear error of law, like the failure to
research wage and hour law in a particular state. Instead, it should include
liability for the close case that happens to come out in favor of the
government and to the detriment of all the taxpayers who took the position.
If a court invalidates the practice of rounding time according to the 7-minute
rule, the automated law system should bear that liability even though it was
not clear when the return was filed that the 7-minute rounding rule was
illegal.
47
29 C.F.R. § 785.48.
48
See Tippett, Alexander & Eigen at 37 (“A common unit of rounding appears to be
seven minutes.”).
The reason for strict liability goes to the heart of the proposed direct
liability idea for automated law systems. The idea is that legal questions will
be priced, and then debated and decided. The interesting questions, those in
need of development, are the close ones. This centralized mechanism of
discovering and discussing these questions will be of little use unless it covers
the matters that will fuel the development of the law. Also, the system
controls and makes these legal decisions as much as it makes the decisions
that involve clearer legal error. It is still the least cost avoider.
The system should also have direct liability for errors arising from mixed
questions of law and fact. The capacity of software programs to manipulate
or influence human users’ responses through their design is well
established.49 If a system’s nudge influences a data input, the centralized
system’s design is at least partly responsible for the data input. Including the
mixed question of fact and law in the direct liability space
might encourage the system to stop nudging users toward noncompliance by
suggesting an inappropriate legal framework for the relevant facts.
49
DeLaney Thomas / Soled.
50
See, e.g., Steven Shavell, Strict Liability Versus Negligence, [2] J. Leg. Stud. 1, 3
(198_) (explaining that strict liability is appropriate for cases of “accidents between sellers
and strangers” because if sellers are forced to pay for harm to strangers, market forces will
adjust the prices charged to customers until the outcome is efficient).
Success means a bureaucratic exercise that shifts costs of error until they fall
on the right party. This is consistent with strict liability.51
Another key is that even though a user and a system are in a contractual
51
See, e.g., John C.P. Goldberg & Benjamin C. Zipursky, The Strict Liability in Fault
and the Fault in Strict Liability, 85 Fordham L. Rev. 743, 745 (2016) (contrasting licensing-
based liability and wrongs-based liability); Alex Stein, The Domain of Torts 117 Colum. L.
Rev. 535, 594 (2017) (contrasting “public mechanism of accident regulation” and wrongs-
based torts laws).
52
6694
53
Need to work out: If tax preparer law remains as is for TPs that do not use automated
law systems like TurboTax, is the effect of this to drive taxpayers back into private preparers,
so no effect of automated law liability but once again way too expensive to prepare? Is a
workaround a different idea for tax preparer law, e.g. if an automated law system, not a
preparer? Should Big 4 software count as automated law?
relationship with each other, they are not in a lawyer-client relationship. Just
as a legal self-help book does not create an attorney-client relationship, so too
the use of a software program does not create such a relationship. The
relationship is more similar to the relationship between a withholding agent
and a payee. Automated law systems are more like the banks that determine
the character and amount of income and basis, and like the employers who
identify and quantify compensation. These banks and employers have strict
liability for certain failures to withhold. Likewise, the idea of the
automated law liability proposal here is to cause automated law systems to
balance, on one hand, the importance of compliance and, on the other hand,
the benefit of different reporting positions to their clients.
The core of the idea presented here is to engage automated law systems
and the government in a dialogue about what the law should be, and to ensure
that if the law is not as first reported, the costs of the error fall on the right
shoulders, which is to say on the users and the system rather than on the
public in general. This analysis applies equally for clear errors and close
calls. Another way to look at this is that someone needs to bear the burden
for close-call errors of law. As between the system and user on one hand and
the public on the other, the right party is the contracting unit of the system
and the user. As between the system, which makes the decision, and the user,
who accepts the system’s decision, the right party is the system.
A. Market Differentiation
contrast, a CleanTax customer will have a higher tax bill, but the product will
cost less because of the lower cost of insuring against automated law
liability.54
The idea that a competitive market among automated law products will
allow the pricing of error risk relies on the idea that automated law providers
will respond to the incentive to provide regulated parties with aggressive,
54
There could also be differentiation within a software product if an automated law
provider charged different amounts of insurance based on different positions. An “audit
insurance cost” bar, like the “refund due” bar, might show a taxpayer how a decrease in tax
liability related to an increase in audit insurance cost.
55
Cf. Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis 50-54
(1970) (identifying deep pockets as a possible reason supporting enterprise liability).
56
This centralization of liability and insurance is the opposite of the prediction of peer-
to-peer insurance and “radical financial disintermediation” suggested elsewhere. See
Michael Abramowicz, Cryptoinsurance, 50 Wake Forest L. Rev. 671, 673 (2015).
57
Too big to fail.
cost-saving answers. The best result under this system is one in which there
are different choices available and some offer more aggressive positions than
others. Then, the market would participate in a helpful dialogue about the
content of the law. The risk and total cost of error would be diversified across
many returns under the umbrella of an automated law system, and then priced
in. This is the picture of market differentiation presented in Part III.A.
There are reasons why such a competitive market might not exist. One
possibility is that the market might be occupied by one dominant player. Or,
there might be collusion among players. Or, regulated parties might not be
able to see and process information about the offsetting costs and benefits of,
for instance, a lower tax bill but a higher price due to a higher cost of
insurance.
One possible outcome is that automated law systems might lean too
strongly in favor of the government, because this approach reduces their risk.
If a system can charge the same amount to a taxpayer whether or not it
produces pro-taxpayer positions in grey areas, then under an automated law
liability regime it will avoid these pro-taxpayer positions, because they cost
more to insure. Alternatively, the automated law system might not be created
to begin with, because of concerns about the cost of automated law liability.58
Yet it is not the goal of this proposal to stamp out pro-taxpayer reporting
positions. The goal is to encourage the risk of these positions to be priced
and their costs to be properly allocated. The goal is to debate and resolve
controversies, not to eliminate them by resolving every close question in
favor of the government.
59
Citron
60
Cf., e.g., Joshua D. Blank & Leigh Osofsky, Simplexity: Plain Language and the Tax
Law, 66 Emory L.J. 189 (2017) (describing IRS guidance that outlines conservative or safe
harbor guidance).
61
Much here to cite, e.g. Ventry, others. The availability of private enforcement of
public rights varies by subject matter and over time. One view is that it has been
systematically cut back over the half-century between 1964 and 2014. See Stephen B. Burbank
& Sean Farhang, Rights and Retrenchment (2017).
CONCLUSION