Four Futures of Legal Automation

Introduction

Will software substitute for lawyers, or increase their earning power? There will be evidence of each in coming decades: Routine work will continue to be automated, while new opportunities will also emerge. The critical question is which trend will be dominant, and what its effect will be.

Scholars have addressed the automation of legal processes since at least the 1960s.[1] None foresaw all the critical developments of the past two decades, and detailed prognostication remains a fool’s errand. Nevertheless, in a time of rapid technological change, scenario analysis can help clarify the possibilities ahead. This Essay describes four possible future climates for the development of legal automation, ranging from a computationally administered “society of control” to a muddling continuation of the status quo.

The future of law and computation is more open ended than most commentators suggest. By mechanically extrapolating present trends in document review into the future, for example, one might expect replacement of lawyers en masse by software. Yet two leading experts on automation say that computerization of legal research will complement the work of many lawyers, rather than substitute for them.[2] They categorize the careers of attorneys as having a “low risk” of computerization, at least compared with employment generally.[3]

This is not to say that law practice has reached some steady state of balance between human capital and software. Rather, the agenda for researchers must shift toward direct examination of law’s diverse practice areas and functions. This Essay lays out a research agenda for better-grounded predictions about the future course of automation, in areas ranging from business formation and mergers and acquisitions, to compliance, to discovery and fact investigation, to litigation, legislation, and regulation.

While we extend and develop extant debates on the degree of automatability of legal tasks, we also acknowledge the sociological and political nature of the discussion. Extralegal developments will be crucial in determining the future balance of computational and human intelligence in the law. No profession is an island, untouched by the trends in power, wealth, influence and status prevailing in the society in which it is embedded.[4] During the New Deal and Great Society, the importance of lawyers rose as they articulated the reach and limits of new social and economic rights. The Affordable Care Act and Dodd-Frank Act could presage a similar rise in the value of lawyers’ services.

There are, however, countervailing social forces. Government employment has declined dramatically during the Obama administration, particularly in state and local offices.[5] Even with their many new powers, regulators are hard-pressed to increase enforcement intensity without added resources. A judiciary often hostile to regulatory action can slow down or stop major initiatives.[6] Most importantly, when neoliberal corporations and individuals become wealthy enough, they are able to shape a climate of opinion that tends toward the marginalization and even trivialization of the type of legal work traditionally considered essential to the fair and efficient working of markets, public programs, and society in general. Some vanguardist technologists even dismiss law as an outdated app ripe to be replaced by a combination of markets, reputational intermediaries, blockchains, and distributed autonomous organizations.[7]

To predict the future of legal automation, we take key considerations internal and external to the legal profession as fundamental variables. Different types of legal work are more or less susceptible to automation. Society can be more or less regulatory and more or less open to procedural protections. A basic schematic emerges:

 

                                        Low Intensity of              High Intensity of
                                        Legal Regulation              Legal Regulation

High Susceptibility to Automation       1. Vestigial Legal            2. Society of Control
                                           Profession

Low Susceptibility to Automation        3. Status Quo                 4. Second Great Compression

 

We use this schematic as a tool for thinking and as a way of organizing future scenarios.[8] Abstract trends like automation and regulation can have very concrete consequences, as our description of the numbered scenarios above will demonstrate.

The first scenario, a Vestigial Legal Profession, can be expected in legal practice areas now serving industries that continue to deregulate. For advocates of disruptive innovation like Harvard Business School Professor Clayton Christensen, that is a consummation devoutly to be wished. Christensen’s acolytes in the legal academy tend to see much of law as little more than a transaction cost imposed on job-creating businesses. From their perspective, automation both reflects and reinforces trends toward laissez-faire deregulation. Simple, precise legal rules are easy to automate. As attorneys’ roles are increasingly taken over by machines, their social prestige declines—thus vitiating their ability to propose more complex or expansive regulatory regimes.

But what happens if artificial intelligence and regulation both advance? This scenario portends what French social theorist Gilles Deleuze called a “Society of Control”;[9] namely, a world in which human action is increasingly managed and monitored by machines.[10] As Peter Reinhardt recently observed, at firms like Uber and 99designs, “lines of code directly control real humans.”[11] In government, too, software can effectively make determinations about who will be audited, who will receive benefits, or who will be denied access to a flight.[12] It is possible to imagine whole areas of law relegated to computational implementation. For example, Lawrence Solum has posited (not endorsed) the development of an “Artificially Intelligent Traffic Authority (AITA),” which could “adapt itself to changes in driver behavior and traffic flow.”[13] The system would be designed to “introduce random variations and run controlled experiments to evaluate the effects of various combinations on traffic pattern.”[14] But the system would not be very forgiving of individual experimentation with, say, violating its rules. Rather, as imagined by Solum, “[v]iolations would be detected by an elaborate system of electronic surveillance” and offenders would be “identified and immediately . . . removed from traffic by a system of cranes located at key intersections.”[15]

Solum offers this example to break down the usual distinctions between human and artificial meaning in the law, not as a policy proposal for the future of transportation. But the scenario is just as useful for flagging the inevitable legal and political aspects of automated law enforcement, even in an area as seemingly technical as traffic. Would the cranes posited in Solum’s hypothetical surgically remove protesters, like the Ferguson marchers, who blocked highways?[16] Would anyone with an expired license or tags be plucked away as well—in a vision already half-realized by subprime lenders who stop cars remotely as soon as a payment is late?[17]

Both the Vestigial Legal Profession and Society of Control scenarios may seem unduly futuristic—and indeed warrant skepticism. As the third scenario—Status Quo—suggests, it is entirely possible that legal automation will move forward far more slowly than many predict or expect. While the legal profession may decline in importance (if not in employment levels), it may not be nearly as susceptible to automation as other fields.

By contrast, robust growth in jobs for those with legal training would likely occur under a fourth scenario, called the “Second Great Compression.” Among economists, the Great Compression refers to the period from roughly 1947 to 1979, when income growth was distributed relatively evenly among quintiles.[18] Since 1979, most income gains have gone to the top quintile, and within that group, the top 1 percent (and within that group, the top 0.1 percent).[19] Reversing that trend toward concentration of income would take very high levels of legal regulation of enterprises, and a rebalancing of the relative power of the state and business to favor the enhanced autonomy of the former. Each trend in the Second Great Compression scenario would increase the power (and, likely, the earnings) of attorneys.

By describing these trends in greater detail below, this Essay illuminates the relative plausibility of each scenario. It takes seriously the possibility of both self-fulfilling and self-preventing prophecies. Both Status Quo and Second Great Compression are likely to be more humane scenarios than Vestigial Legal Profession and Society of Control. This work is designed to make it more difficult for key policymakers to accept either of those high-automation scenarios uncritically. And if these substandard scenarios do indeed come to pass, at least the profession will have been warned in advance.

I. Vestigial Legal Profession Scenario: High Automation, Low Regulation

The most prominent advocates for legal automation are, at present, closely tied to deregulatory or laissez-faire views.[20] John O. McGinnis believes that machine intelligence will substitute for legal expertise, thereby reducing the incomes of many lawyers.[21] One of the main reasons he applauds this development is that, in his words:

A decline in the clout of law schools and lawyers could have potentially broader political effects. For the last half-century, many law professors and lawyers have pressed for more government intervention in the economy. This isn’t surprising. Lawyers in the modern regulatory state reap rewards from big government because their expertise is needed to understand and comply with (or exploit) complicated and ever-changing rules. In contrast, the entrepreneurs and innovators driving our computational revolution benefit more from a stable regulatory regime and limited government. As they replace lawyers in influence, they’re likely to shape a politics more friendly to markets and less so to regulation.[22]

 

For McGinnis, there is a zero-sum relationship between the clout of innovators and lawyers: as one rises, the other falls.[23] The political views of each are also easy to map: the technology crowd is libertarian, in favor of limited government, while attorneys err on the side of statism, harboring both ideological and material biases toward expanding government power.

Admittedly, McGinnis’s assumptions here may be problematic. Many of the most powerful and well-paid attorneys in the United States operate practices that are profoundly deregulatory.[24] On the technology side, it is by no means clear that its vanguard firms—mainly located in Northern California—are filled with libertarians. Google employees’ political donations have skewed strongly Democratic, while tech workers in general do not hew to a single political line.[25]

A. Technologies in High-Automation/Low-Regulation Legal Fields

Still, for the sake of argument, consider a plausible clarification of McGinnis’s views: Those at the top of technology firms may well favor deregulation, because cuts to their legal costs may directly enrich them or their shareholders. And the bulk of lawyers, toiling below the level of partnership at white-shoe law firms, have some interest in maintaining the structures of legislation and regulation that are their raison d’être. How might venture capitalists and technologists eventually fund and develop the tools needed to replace lawyers? Many technologies have already been developed in high-automation/low-regulation areas and have had both negative and positive impacts on the legal community.

1. eDiscovery

At least in the field of discovery, tools such as Relativity, HP Autonomy, Merrill, and Stratify have already been developed. Fewer young associates now pore over boxes of documents to find mentions of a query term; eDiscovery reigns instead. According to the Sedona Conference, eDiscovery is “the process of identifying, preserving, collecting, preparing, reviewing, and producing electronically stored information . . . .”[26] In predictive coding, a document reviewer codes a sample of documents and key terms according to how useful they are to a case.[27] Based on these inputs, predictive coding software locates other documents within a database that are also likely to be useful as evidence.[28] Predictive coding therefore decreases the number of documents that must be reviewed manually and can cut time spent in discovery by 75 percent.[29]
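
To make the mechanism concrete, the following is a minimal, hypothetical sketch of the predictive-coding workflow, written in Python with the open-source scikit-learn library. The documents, labels, and model choice are invented for illustration; production systems rely on far larger seed sets, iterative sampling, and validation protocols.

```python
# Illustrative predictive-coding sketch (hypothetical data): a reviewer's
# relevance calls on a small "seed set" train a model that then ranks the
# unreviewed corpus so human review can focus on likely-relevant documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a human reviewer has already coded: 1 = relevant, 0 = not relevant.
seed_docs = [
    "notes from call with competitor about baggage fee pricing",
    "draft term sheet for the proposed merger",
    "cafeteria menu and parking instructions for next week",
    "reminder to submit quarterly timesheets by friday",
]
seed_labels = [1, 1, 0, 0]

# Unreviewed documents to be prioritized for human review.
corpus = [
    "follow up on the baggage fee pricing call with competitor",
    "parking reminder for friday staff meeting",
]

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Rank unreviewed documents by predicted probability of relevance.
scores = model.predict_proba(vectorizer.transform(corpus))[:, 1]
for score, doc in sorted(zip(scores, corpus), reverse=True):
    print(f"{score:.2f}  {doc}")
```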

2. eResearch and Form Providers

Similarly, the days of manually “Shepardizing” a case are gone: Citing documents appear in online research tools like Westlaw or Lexis, or might even be found via a clever Google search. Apps can now automate basic wills, incorporation documents, or expungements. LegalZoom has been able to create legal “form-like” documents for quite some time, and many limited liability companies have been formed using its automated software.[30] Consumer-facing providers of automated legal solutions have faced some lawsuits.[31] They may be able to provide help to those seeking simple assistance, but they always need to be sensitive to the myriad issues that can arise when problems become more complex.[32]

B. Impact on Law Firm Revenue

A Vestigial Legal Profession would reflect larger trends toward inequality in the economy. Those at the bottom of the profession would continue to be replaced by machines; those at the top would find software complementing their expertise and connections, extending their reach and power. Similarly, the wealthiest firms and persons would find their actions increasingly untouchable as various explicit and tacit rules bar legal action against them, or shunt it into fast-tracked, low-stakes arbitral forums.[33] Meanwhile, even if citizens of low socioeconomic status are rendered more vulnerable by new legal orders, there will be little worth winning from them in litigation.

A Vestigial Legal Profession scenario would vindicate critics of the legal profession who contend that much of law firms’ work is now routine and could be broken down into more efficient processes.[34] But assessing the impact of automated form provision on revenue requires more data, including, for example, the percentage of revenue firms obtain from this work. Nevertheless, the hit to revenue might be substantial; the lack of hard data proving a certain level of dependence on such work does not necessarily indicate the absence of such dependence.

Repetitive administrative functions such as document review and filling out forms are prime candidates for automation. Lawyers can farm out such tasks, which the ABA rules permit as long as a lawyer “supervises” the delegation.[35] On the other hand, small firms may not be competitive because they end up doing much of this work themselves.[36] As the number of persons employed as attorneys diminished under this scenario, their clout would also fade, starting a self-reinforcing cycle of diminishing influence.

II. Society of Control Scenario: High Automation, High Regulation

McGinnis thus may well be right that a highly automated legal system will advance the deregulation of large corporations. Yet the automation of regulation and law enforcement need not go hand in hand with deregulation for small firms or citizens. On the contrary, just as ever-cheaper sensors and cameras lead to greater opportunities for surveillance, the internet of things and connected devices will turn ever more aspects of daily experience into pressure points for regulatory intervention.[37] The same technology that allows the government to direct deposit income tax refunds at tax time may also allow the direct debit of fines triggered by red light cameras.

The automation of law enforcement is already well documented in many fields. For example, copyright holders have installed multiple layers of content control into compact disks, files, platforms, and surveillance systems. Known as “digital rights management,” kludgy versions of this automation might leave a laptop capable of playing only European-region DVDs if its settings are changed too many times.[38] More sophisticated surveillance tools can detect patterns of sound and images owned by a copyright holder and automatically disable their transmission. That happened during an awards show broadcast online: when an algorithm scanned some notes of protected content, the stream was immediately cut off.[39] There are plans to automate terms of service agreements, so privacy protections effectively “run with the data” as embedded code, restricting some uses and permitting others.[40]
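
A stripped-down sketch of how such matching might work appears below. It is purely illustrative: it uses exact hashes of short media segments where real rights-management systems use perceptual fingerprints that survive noise and re-encoding, and all of the segment names are invented.

```python
# Hypothetical sketch of automated content matching: segment fingerprints
# registered by a rights holder are checked during a live stream, and the
# transmission is cut off once enough consecutive segments match.
import hashlib

def fingerprint(segment: bytes) -> str:
    return hashlib.sha256(segment).hexdigest()

# Fingerprints a rights holder has registered (invented sample data).
registered = {fingerprint(s) for s in (b"chorus-segment-1", b"chorus-segment-2")}

def stream(segments, threshold=2):
    """Yield segments to viewers until `threshold` consecutive matches occur."""
    consecutive = 0
    for seg in segments:
        if fingerprint(seg) in registered:
            consecutive += 1
            if consecutive >= threshold:
                print("Transmission disabled: protected content detected.")
                return
        else:
            consecutive = 0
        yield seg

broadcast = (b"host-intro", b"chorus-segment-1", b"chorus-segment-2", b"acceptance-speech")
for seg in stream(broadcast):
    print("streaming:", seg.decode())
```

Note that the cutoff in this sketch turns solely on the match itself; nothing in the logic asks whether the use was licensed or fair.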

The state has also outsourced many regulatory and legal decisions to computation. There are too many tax returns for IRS personnel to examine by hand; “audit flags” must be programmed to determine which should get scrutiny, or be rejected outright.[41] Homeland security officials use big data and algorithms to determine who is a security risk, and who can pass unmolested to their flights.[42] Even the humble red light camera is part of the trend, alerting officials to scofflaws who illicitly cross intersections. Predictive policing deploys law enforcement resources before crimes are committed.[43] And once criminals are convicted, “evidence-based sentencing” may quantify punishment by using data and algorithms to adjust sentence length based on myriad factors.[44]
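
To see how such determinations become mechanical, consider a deliberately simplified, hypothetical audit-flag rule. The fields and thresholds below are invented and bear no relation to actual IRS criteria, but they show how encoded conditions, rather than a human examiner, select returns for scrutiny.

```python
# Hypothetical audit-flag rules (invented thresholds, not actual IRS criteria).
# Once conditions like these are encoded, the software decides which filers
# receive scrutiny.
def audit_flags(tax_return: dict) -> list:
    flags = []
    income = tax_return["reported_income"]
    deductions = tax_return["claimed_deductions"]
    if income > 0 and deductions / income > 0.5:
        flags.append("deductions exceed half of reported income")
    if tax_return.get("cash_business") and income < 20_000:
        flags.append("cash-intensive business with unusually low income")
    return flags

sample = {"reported_income": 15_000, "claimed_deductions": 9_000, "cash_business": True}
print(audit_flags(sample))  # both invented rules fire for this hypothetical filer
```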

More avant-garde proposals would build enforcement regimes into contracts, on the model of “distributed autonomous organizations (DAOs).”[45] The use of Bitcoin has fueled hopes that “distributed trust networks” could replace traditional legal authorities.[46] With this model, the fundamental function of determining who owns or owes could be outsourced from persons to the blockchain, a record of transactions maintained and monitored by a large number of engaged observers.[47] On the governmental level, libertarian Hans-Hermann Hoppe envisioned the rise of government-like organizations (GLOs) to enforce agreements among private parties, with centralized administrations gradually falling into desuetude.[48]
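
The core data structure behind such proposals can be sketched briefly. The following is a minimal illustration (omitting proof-of-work, digital signatures, and peer-to-peer consensus) of why observers holding copies of a hash-linked ledger can detect any attempt to rewrite the recorded history of who owns or owes.

```python
# Minimal hash-chained ledger sketch: each block commits to the hash of the
# previous block, so any observer with a copy can verify the history.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Add a block that commits to the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    """Confirm that no earlier block has been altered."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger = []
append_block(ledger, ["Alice transfers asset X to Bob"])
append_block(ledger, ["Bob pledges asset X as collateral to Carol"])
print(verify(ledger))                                 # True
ledger[0]["transactions"] = ["Alice keeps asset X"]   # attempted tampering
print(verify(ledger))                                 # False: history no longer verifies
```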

To assure transparency and predictability, software code may ultimately govern such GLOs. Samir Chopra and Laurence White call these programs “autonomous artificial agents (AAAs)”—agents because they act on behalf of someone, artificial because they are not organic persons or animals, and autonomous because they can perform actions without checking back in with the person who programmed them or set them in motion.[49] Mutual (or unanimous) consent of those regulated (or governed) by the GLOs would be necessary to alter the code.

Though some Silicon Valley technology firms may eagerly await the advance of both public and private versions of the Society of Control, there are still some critical problems to be worked out. Classic values of administrative procedure, such as due process, are not easily coded into software language.[50] Many automated implementations of social welfare programs, ranging from state emergency assistance to Affordable Care Act exchanges, have resulted in erroneous denials of benefits, lengthy delays, and troubling outcomes.[51] Financial engineers may quantify risks in ever more precise ways for compliance purposes, but their models have also led to financial instability and even financial crisis.[52]

Given the extraordinary returns it has seen over the past few decades, the finance sector has been particularly well resourced to deploy automation of compliance and trading—with mixed results. Even when structured securities, parsed by proprietary software, proved good for the firms’ bottom line, they did not contribute to overall economic productivity.[53]

The most fully automated part of the finance sector—high-frequency trading—has generated considerable controversy.[54] Algorithmic trading can create extraordinary instability and frozen markets when split-second trading strategies interact in unexpected ways.[55] Consider, for instance, the flash crash of May 6, 2010, when the stock market lost hundreds of points in a matter of minutes.[56] In a report on the crash, the Commodity Futures Trading Commission (CFTC) and Securities and Exchange Commission (SEC) observed that “as liquidity completely evaporated . . . trades [were] executed at irrational prices as low as one penny or as high as $100,000.”[57] Traders had programmed split-second algorithmic strategies to gain a competitive edge, but soon found themselves in the position of a sorcerer’s apprentice, unable to control the technology they had developed.[58] Though prices returned to normal the same day, there is no guarantee future markets will be so lucky.

One of the leading films on artificial intelligence, Fast, Cheap, and Out of Control, presented the whimsy of a roboticist who designed bug-like robots. The film’s title suggested an almost inevitable consequence of many forms of automation: while they could make daily processes faster and cheaper than direct human control would allow, they also threaten to go out of control once a critical mass of them begins interacting in unpredictable ways. The paradox of a Society of Control, already foreshadowed by the flash crash of 2010, is that human beings have little clear sense of what rules or patterns of conduct will ultimately develop in highly automated environments rife with measures to generate rules and countermeasures to evade them. The most successful attorneys in this scenario will embrace and utilize automation in their practices while identifying problematic applications of artificial intelligence to law.

III. Status Quo Scenario: Low Automation, Low Regulation

Many lawyers spend significant amounts of time doing repetitive work, which is usually not the best or most efficient use of an attorney’s talents. That may well be taken care of by machines. The acceleration of automation beyond its present level, however, appears doubtful for many reasons. Combined with the general trend of deregulation now afoot in many industries and policy areas, stalled automation would result in our Status Quo scenario.

Why is a continuing, low-automation Status Quo a real possibility? It is easy to overestimate possibilities for technological advance. Consider, for instance, automated form provision and advice in the model of LegalZoom. This cookie-cutter, one-size-fits-all approach is dangerous: a client could end up with forms entirely wrong for his or her particular situation, or with a contract that turns out to be unenforceable. Although some LLCs rely on LegalZoom to draft their legal documents, it can be excessively risky to use LegalZoom for high-stakes business deals. Risk aversion may trump technology diffusion.

A. eDiscovery Legal Costs

The ultimate impact of eDiscovery remains unclear. While Moore’s Law may apply to computation, it does not govern social relations. Automated methods have already reduced demand for some legal positions, and may continue to do so. However, one advocate of legal automation also observes that continued automation of discovery could “increase profits to high-performing law firms and legal product companies engaged in the enterprise.”[59] This is in part because the extensive training requirements for software use and the cost of the predictive coding product itself allow top law firms to charge a premium for these services.[60]

Of course, there will be countervailing pressures from clients. A number of best practices have helped reduce eDiscovery costs, which are usually determined by the number of gigabytes of data to be mined.[61] Still, these methods can sometimes be cost prohibitive, and they may push the boundaries of relevancy.[62] Digital records also increase the scope of discovery.[63] Furthermore, even if eDiscovery on balance shrinks the size of the legal sector, it could be cost prohibitive for some clients to provide information in the format automated eDiscovery requires, especially if a large number of redactions are needed, as is the case with trade secrets and other protected information.[64] In particular, eDiscovery can require significant time during the planning stages, as parties must negotiate the scope, cost, and amount of materials that will be made available.[65] And if planning is not done correctly, large volumes of data can produce exorbitant eDiscovery costs, as vendors charge per gigabyte of data.[66] That charging scheme suggests that even when eDiscovery is done without outside assistance, costs increase roughly in proportion to the size of the electronic data pool.[67] The Internet of Things’ ubiquitous sensor networks will be just one of many new sources of data poised to increase this burden.

Compounding the issue, Rule 26 of the Federal Rules of Civil Procedure allows parties to obtain discovery of any “nonprivileged matter that is relevant to any party’s claim or defense.”[68] Given this expansive scope, eDiscovery can create new opportunities for sanctions or legal malpractice lawsuits.[69] For example, in In re Delta/AirTran Baggage Fee Antitrust Litigation,[70] sanctions were imposed because not all relevant storage was produced to the plaintiff.[71] This illustrates the failures that can occur when IT and legal departments do not work closely together on eDiscovery matters. Furthermore, eDiscovery almost always requires that documents already be stored electronically; otherwise, additional costs are incurred converting them into electronically stored information.[72] Because of the sheer amount of unwieldy electronic data produced by most businesses, there are still several impediments to a radical expansion of automated eDiscovery.

B. The Problem of Close Cases

Artificial intelligence would work best for rule-based law in easy cases. Examples of easy cases include those in which damages can be easily calculated, in which there is precedent on all fours,[73] or in which the law is settled and there are no outstanding policy or legal questions. A breach of contract claim with damages clearly described in the contract or a typical rear-end collision with only body damage to a vehicle would most likely be easy cases. These easy cases occur often, but they are also typically settled quickly outside of court with minimal attorney effort, if an attorney is involved at all.[74]
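
A hypothetical sketch may make the point concrete: when damages are spelled out in the contract itself, the rule can be applied mechanically, and anything less tidy falls outside it. The field names and figures below are invented.

```python
# Hypothetical "easy case" rule: liquidated damages spelled out in the contract
# can be computed mechanically. Ambiguous terms, unconscionability defenses, or
# disputed facts fall outside what a rule like this can handle.
def liquidated_damages(contract: dict, days_late: int) -> float:
    """Apply the contract's own per-day damages clause, up to its stated cap."""
    return min(days_late * contract["per_day_penalty"], contract["damages_cap"])

deal = {"per_day_penalty": 500.0, "damages_cap": 10_000.0}
print(liquidated_damages(deal, days_late=12))  # 6000.0: well specified, hence automatable
```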

One of the biggest issues with pervasively applying artificial intelligence to rule-based law involves edge or corner cases, in which conduct appears, on the surface, to be contrary to law.[75] These are the typical difficult cases, in which special facts differentiate the situation from settled law.[76] One example would be a case in which the contract terms are ambiguous as to the specific type of breach that occurred. Another example involves emergency conditions that might justify certain conduct, or an area of the law, such as cybersecurity, that is still developing and largely unsettled.[77]

Conduct that highlights ambiguity in recently passed rule-based law gives rise to situations in which artificial intelligence tools, like eResearch, could easily produce incorrect results. These errors would be exacerbated by software that was not updated when a law changed. Because law can change through either legislative or judicial action, automated software programs are likely to miss at least some updates. The problem becomes even more intractable when one considers the varying levels of authority and clarity inherent in the modern administrative state’s panoply of legislative rules, interpretive rules, guidances, adjudications, and policy statements. Legislatures cannot possibly anticipate every set of facts to which a law might be applied.[78] Even in areas where rule-based law appears ironclad on the surface, it is often difficult to discern what law governs a scenario because of the various exceptions that develop in the course of administration and enforcement, or in judge-made common law.[79] When one considers persuasive authority, like dicta and laws that other states have adopted, even more complexity is evident, and the need for human (and humane) judgment is imperative.[80] Furthermore, limitations in human language and issues with drafting conventions create loopholes that could provide clients with excellent defenses.[81] Because software will be written subject to these same limitations, there is a significant possibility that it will not discover problems and opportunities that a prudent attorney would.[82]

This problem is exacerbated by limitations in software: (1) it cannot anticipate the infinite fact patterns that arise in the difficult cases that are typically litigated; (2) because machine learning is largely based on pattern recognition, it is likely to offer the easy solution associated with a similar easy case, while failing to replicate commonsense judgments about the loopholes and policy concerns that attach to the more specific fact patterns of substantially harder cases; and (3) non-attorney clients using the software would likely omit key facts that might change the application of a rigid law to their particular circumstances. A limited user interface, which is the most likely form an artificial intelligence solution would take, could make the odds for the client even worse.[83] For example, a client might settle based on a software program’s advice even though the contract terms were ambiguous or unconscionable, not knowing that a court would not have upheld them. Similarly, someone without a legal background is unlikely to recognize that there was no formal offer and acceptance in a contract, and may incorrectly answer the program’s questions about offer and acceptance in the affirmative.[84]

It may be difficult for artificial intelligence to replicate a lawyer’s efforts to parse analogies.[85] The importance of each word or phrase to an argument, and the lengthy, difficult language of statutes and court opinions, defy easy translation into algorithms.[86]

Attorneys almost always search for analogies during standard legal research. By doing so, they can discover cases in which a court held that a particular rule applied in situations that can be analogized to a client’s scenario, even when the facts are not on all fours with precedent.[87] Natural language recognition software, like Siri and Watson, could guard against ambiguity by prompting users with specific questions relevant to the outcome of the case. But because fact patterns are so individualized and specific, it would be impossible to customize follow-up questions to account for all possible scenarios.[88] These hard cases, which do not occur regularly and are generally not predictable, are the usual cases in which clients now decide to consult an attorney.[89] For easy cases that occur regularly, like a standard rear-end collision with no personal injury, people usually settle without attorneys.

Moreover, it is questionable whether many of those who now consult an attorney for routine easy cases would change their behavior and forgo an attorney if artificial intelligence were available. Ultimately, in order to figure out how artificial intelligence will affect the legal profession, it is necessary to determine what percentage of attorney revenue comes from the routine easy cases where artificial intelligence would be most useful, and to compare that revenue with revenue from difficult, unsettled areas of law. Given that easy cases should not require a significant amount of attorney time, they can be expected to generate the least revenue per case. Easy cases that settle quickly also tend not to involve sums of money as large as those in more difficult cases. In short: We should not presume that eDiscovery successes can be easily extrapolated to the myriad other tasks attorneys perform.

C. Inability to Emulate Human Creativity

Finally, artificial intelligence will struggle to emulate human creativity, which is subjective and hard to measure. One of the more viable areas of artificial intelligence that McGinnis focuses on is legal search, such as eResearch.[90] These natural language eResearch solutions will likely provide a first-order approximation that is correct in many situations, but they could lead to significant errors in many others.[91] Artificial intelligence is also likely to miss essential policy questions that could be controlling in a case—for instance, that for certain administrability, efficiency, or fairness reasons, a rigid rule should not apply to a specific set of facts.[92] The fact that computers are unlikely to identify when a law’s application should be limited, and that artificial intelligence struggles with skills requiring manipulation, creativity, and social intelligence, highlights these issues.[93] Furthermore, while leading advocates of legal automation focus on legal search, discovery, document generation, and predicting case outcomes, they fail to address what percentage of overall attorney income is based on these activities.[94] Another important source of attorney revenue is associated with providing expert advice, investigating facts, organizing materials, and applying facts to law.[95] There is no clear computational replacement for many of these activities on the horizon—particularly in complex and fast-changing areas of law, legislation, and policy.

IV. Second Great Compression Scenario: Low Automation, High Regulation

What will be the future of the legal profession if the challenges to automation in our Status Quo scenario persist, and regulation increases rather than decreases? Retarding automation that controls, stigmatizes, or cheats innocent people, or that sets up arms races with zero productive gains, should be a much bigger part of public discussion regarding the role of machines and software in ordering human affairs. If such discussions lead to new policy, society might see a fourth scenario, here described as the Second Great Compression.

The first Great Compression involved the reduction of inequality between 1947 and 1979, as workers demanded more compensation for their labor and capital received commensurately less of national income.[96] (This trend reversed from 1979 to the present, as capital became more dominant.)[97]

The question now is whether law will be turned toward more egalitarian ends. Much of the Dodd-Frank Act and the Affordable Care Act could be interpreted as a renewal of American society’s egalitarian aims.[98] Lawyers will need to be a large part of these efforts, parsing complex regulations on, inter alia, proprietary trading, risk adjustment in insurance markets, and the scope of professional practice in rapidly changing finance and health sectors. Policymakers may well decide that human judgment is critical to each of those tasks. And they may refocus the goal of automation to help stabilize and cheapen the supply of necessities, such as transportation, energy, food, and manufactured goods.

That possibility may strike some as excessively dirigiste. But each of the prior alternatives discussed above has its own elements of central planning, whether by private or government actors. All too often, the automation literature is focused on replacing humans, rather than respecting their hopes, duties, and aspirations. A central task of educators, managers, and business leaders should be finding ways to complement a workforce’s existing skills, rather than sweeping that workforce aside.[99] That does not simply mean creating workers with skill sets that better plug into the needs of machines. It also means doing the opposite: creating machines that better enhance and respect the abilities and needs of workers. That would be a “machine age” welcoming for all, rather than one calibrated to reflect and extend the power of machine owners.

As one of us has shown in prior work, a great deal of inequality can be explained by the dismantling or subversion of laws designed to maintain corporate responsibility and the obligations of all the wealthy to pay taxes due under law.[100] Ideological movements designed to shut the courthouse door to the injured have been funded exceedingly well by their corporate benefactors. A self-reinforcing cycle of corporate influence over legislation and its interpretation, leading to higher profits, leading to more resources to assert corporate influence, explains many of the economic difficulties faced by plaintiffs’ attorneys in fields like environmental law and consumer protection law. It also reduces the demand for defense-side attorneys.

A reversal of that cycle—reasserting environmental and other standards, and winning real compensation for the victims of corporate misbehavior (and fees for their attorneys)—could provide the resources for countervailing powers in both federal and state politics. It would also create more demand for legal services among the many citizens who have now given up hope of obtaining compensation for wrongs they have suffered.

Of course, there are also visions of social justice achievable without a great deal of regulation and legal enforcement. A universal basic income might render many disability or welfare attorneys superfluous; a single-payer healthcare program might reduce the employability of health lawyers.[101] But political realists in both parties are quick to declare the extreme unlikelihood of either option. Given those realists’ power, routes to a more egalitarian social order in the United States may well continue to require the intensive labor of legal professionals.[102]

Conclusion

Simple legal jobs (such as document coding) are prime candidates for legal automation. More complex tasks cannot be easily routinized. So far, the debate on the likely scope and intensity of legal automation has focused on the degree to which legal tasks are simple or complex. Just as important to the future of the legal profession, however, is the degree of regulation or deregulation likely in the future.

Situations involving conflicting rights, unique fact patterns, and open-ended laws will likely remain exceedingly difficult to automate for an extended period of time. Deregulation may, however, effectively strip many persons of their rights and render once-hard cases simple. Consider, for instance, the trend in contract law to permit individuals to give up their right to join class actions, or even to seek recourse in a court, via terms of service agreements that almost no consumer actually reads. A robot could dispose of nearly all cases arising in the wake of such agreements, if the only legal issue critical for the vast majority of consumers were whether they had “agreed.” Once the law and fact of consent in such situations are settled, the outcomes are entirely predictable.

On the other hand, disputes that now seem easy, because one party is so clearly correct as a matter of law, may be rendered hard to automate by new rules that give now-disadvantaged parties new rights. For example, a person in the United States cannot sue Google for automatically placing a 20-year-old bankruptcy action against him at the top of the results in a search for his name. In Europe, however, the opposite is the case: A newly recognized “right to be forgotten” (better named the “right to be delisted”) gives persons the chance to challenge the inclusion of certain irrelevant, damaging material in such results.[103] This decision realizes the basic principles behind expungement law in the digital age. It also creates new work for attorneys and policy advisors seeking to balance the public’s right to know against individual rights of privacy and reputational integrity.

Thus, legal and cultural change can render once contestable disputes essentially automatable, and can also render once automatically resolved disputes open to new levels of contestation. By explaining in general terms how each of these reversals could arise, this Essay combines technical and sociological analyses of four distinct climates for the future of legal automation. We hope the scenarios we have described have demonstrated that the future of law and computation hinges on broader social trends outside of law, and thus is far more open ended than most commentators now suggest.

[1]. Julius Stone, Legal System and Lawyers’ Reasonings 37 (1964).

[2]. Carl Benedikt Frey & Michael A. Osborne, Oxford Martin Sch., The Future of Employment: How Susceptible are Jobs to Computerisation?, at 41 (Sept. 17, 2013), http://www.oxfordmartin.ox.ac.uk/downloads/academic/The_Future_of_Employment.pdf. Frey is an expert in economics and business and Osborne is an expert in robotics. They believe that “for the work of lawyers to be fully automated, engineering bottlenecks to creative and social intelligence will need to be overcome . . . .” Id.

[3]. Id. at 37, 41. See also Michael Simkovic & Frank McIntyre, The Economic Value of a Law Degree, 43 J. Legal Stud. 249, 275 (2014) (“Predictions of structural change in the legal industry date back at least to the invention of the typewriter. Yet lawyers have prospered with the introduction and adoption of new technologies and modes of work—computerized and modular legal research through Lexis and Westlaw, word processing, citation software, electronic document storage and filing systems, automated document comparison, electronic document searching, e-mail, photocopying, desktop publishing, standardized legal forms, and will and tax preparation software. Although each of these was seen by some as a potentially damaging structural shift in the return to law, the law degree still offers a large earnings premium.”) (internal citations omitted).

[4]. See Andrew Abbott, The System of Professions 315 (1988); Eliot Freidson, Professionalism: The Third Logic (2001).

[5]. Floyd Norris, Under Obama, a Record Decline in Government Jobs, N.Y. Times (Jan. 6, 2012, 12:53 PM), http://economix.blogs.nytimes.com/2012/01/06/under-obama-a-record-decline-in-government-jobs.

[6]. See, e.g., David Yatte et al., D.C. Circuit Vacates FERC Rule on Pricing of Demand Response in Organized Energy Markets, Van Ness Feldman (May 27, 2014), http://www.vnf.com/getpdf.aspx?show=2909 (showing the impact the D.C. Circuit has on energy regulation).

[7]. See, e.g., Mark Wilson, The Latest in 'Technology Will Make Lawyers Obsolete!', FindLaw (Jan. 6, 2015, 11:39 AM), http://blogs.findlaw.com/technologist/2015/01/the-latest-in-technology-will-make-lawyers-obsolete.html.

[8]. For methodological precedents, see, for example, Riccardo Campa, Technological Growth and Unemployment: A Global Scenario Analysis, 24 J. Evolution & Tech. 86, 89 (2014); see also Peter Frase, Four Futures (2015) (describing four scenarios based on high or low levels of scarcity and hierarchy).

[9]. Gilles Deleuze, Postscript on the Societies of Control, 59 October 3 (1992), available at https://files.nyu.edu/dnm232/public/deleuze_postcript.pdf.

[10]. Lawrence B. Solum, Artificial Meaning, 89 Wash. L. Rev. 69 (2014).

[11]. Peter Reinhardt, Replacing Middle Management With APIs, ReinPK, http://rein.pk/replacing-middle-management-with-apis (last visited Apr. 12, 2015).

[12]. Danielle Keats Citron, Technological Due Process, 85 Wash. U. L. Rev. 1249, 1252 (2008).

[13]. Solum, supra note 10, at 75.

[14]. Id. This is an acceleration of processes endorsed in Jim Manzi, Uncontrolled: The Surprising Payoff of Trial-and-Error For Business, Politics, and Society (2012). Acceleration is a key theme in automation, and a social theory of acceleration is needed to address rapid automation. See generally Hartmut Rosa, Social Acceleration: A New Theory of Modernity (2013).

[15]. Solum, supra note 10, at 75.

[16]. For an account of the importance of protest to civic life, see Bernard E. Harcourt, Political Disobedience, in Occupy: Three Inquiries in Disobedience 45, 45 (2013).

[17]. See Jathan Sadowski & Frank A. Pasquale, Creditors Use New Devices to Put Squeeze on Debtors, AlJazeera Am. (Nov. 9, 2014, 2:00 AM), http://america.aljazeera.com/opinions/2014/11/debt-collection-technologystarterinterruptdevicesubprime.html.

[18]. See Claudia Goldin & Robert Margo, The Great Compression: The Wage Structure in the United States at Mid-Century, 107 Q.J. Econ. 1, 1 (1992).

[19]. Frank Pasquale, Access to Medicine in an Era of Fractal Inequality, 19 Annals Health L. 269, 275–76 (2010).

[20]. See Clay Michael Gillespie, Legal Consulting Firm Believes Artificial Intelligence Could Replace Lawyers by 2030, Hacked (Jan. 2, 2015), https://hacked.com/legal-consulting-firm-believes-artificial-intelligence-replace-lawyers-2030; Martha Neil, Susskind: Are Lawyers Becoming Obsolete?, ABA J. (Oct. 23, 2007, 6:40 PM), http://www.abajournal.com/news/article/susskind_are_lawyers_becoming_obselete.

[21]. John O. McGinnis, Machines v. Lawyers, City J., http://www.city-journal.org/2014/24_2_machines-vs-lawyers.html (last visited Apr. 12, 2015).

[22]. Id.

[23]. See John O. McGinnis & Steven Wasick, Law’s Algorithm, 66 Fla. L. Rev. 991, 993–94 (2014), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2130085.

[24]. These attorneys are highly paid not merely for navigating complex legal rules, but for advocacy that results in judicial decisions (such as expansive preemption rulings) that end the application of large bodies of law to substantial areas of business conduct. The extraordinary narrowing of the scope of antitrust law is also a case in point: a result of highly skilled advocacy by large firms’ attorneys and their coadjutants in the legal academy. See, e.g., Barry C. Lynn, Cornered: The New Monopoly Capitalism and the Economics of Destruction (2010); William Davies, Economics and the ‘Nonsense’ of Law: The Case of the Chicago Antitrust Revolution, 39 Econ. & Soc’y 64, 65 (2010) (explaining that the Law and Economics School focused competition policy on the goal of maximizing a stylized measure of consumer welfare).

[25]. David Auerbach, The Silicon Valley-ization of San Francisco, Slate (Dec. 20, 2013, 12:36 PM), http://www.slate.com/articles/technology/the_next_silicon_valley/2013/12/silicon_valley_s_invasion_of_san_francisco_not_quite_the_ayn_rand_nightmare.html.

[26]. Sedona Conference Working Grp. Series, The Sedona Conference Glossary: E-Discovery & Digital Information Management 18 (Sherry B. Harris ed., 3d ed. 2010).

[27]. See Wallis M. Hampton, Predictive Coding: It’s Here to Stay, Prac. L. J., May 2014, at 28, available at http://www.skadden.com/sites/default/files/publications/LIT_JuneJuly14_EDiscoveryBulletin.pdf.

[28]. See id.

[29]. See Nicholas M. Pace & Laura Zakaras, RAND Inst. for Civil Justice, Where the Money Goes: Understanding Litigant Expenditures for Producing Electronic Discovery 19–20, 21 fig.2.2, 41, 42 fig.4.1 (2012), http://www.rand.org/content/dam/rand/pubs/monographs/2012/RAND_MG1208.pdf; Stephanie Wilkins Pugsley, eDiscovery: It’s Time to Drop the ‘e’, 27 Utah B.J., July/Aug. 2014, at 14, 16; Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, 17 Rich. J.L. & Tech. 1, 48 (2011).

[30]. Anthony Ha, LegalZoom Files for $120M IPO, Saw $156M in Revenue Last Year, TechCrunch (May 11, 2012), http://techcrunch.com/2012/05/11/legalzoom-ipo (“[M]ore than 20 percent of limited liability companies formed in California did so through LegalZoom.”).

[31]. See Janson v. LegalZoom.com, Inc., 802 F. Supp. 2d 1053 (W.D. Mo. 2011); In re Boettcher, 262 B.R. 94 (Bankr. N.D. Cal. 2001); Thomas v. State, 226 S.W.3d 697 (Tex. App. 2007).

[32]. See Isaac Figueras, Comment, The LegalZoom Identity Crisis: Legal Form Provider or Lawyer in Sheep’s Clothing, 63 Case W. Res. L. Rev. 1419, 1440 (2013) (“Courts note that LegalZoom can offer blank forms and instructions on how to use them, but LegalZoom may need to alter its document preparations services, stop offering general guidance on state laws, or remove some of the checkpoints where its employees review a customer’s legal document. Indeed, LegalZoom may need to alter its business model to make it more akin to a legal self-help kit than it currently is. Otherwise, LegalZoom may suffer from more legal challenges in the future.”) (internal citations omitted).

[33]. See Glenn Greenwald, With Liberty and Justice for Some (2012); see also Marcy Wheeler, FBI’s “Rich White Man” Hypocrisy: How New Policy Will Let the 1 Percent Skate Free, Salon (Feb. 5, 2015, 1:24 PM), http://www.salon.com/2015/02/05/fbis_rich_white_man_hypocrisy_how_new_policy_will_let_the_1_percent_skate_free (explaining that Section 702’s definition of serious crimes does not include white collar crime).

[34]. See Jerry Van Hoy, Franchise Law Firms and the Transformation of Personal Legal Services 40–41 (1997); Ray Worthy Campbell, Rethinking Regulation and Innovation in the U.S. Legal Services Market, 9 N.Y.U. J.L. & Bus. 1, 65 (2012).

[35]. Campbell, supra note 34, at 41.

[36]. See id. at 34, 52; Michael Ariens, Know the Law: A History of Legal Specialization, 45 S.C. L. Rev. 1003, 1007–10 (1994); William D. Henderson, Three Generations of U.S. Lawyers: Generalists, Specialists, Project Managers, 70 Md. L. Rev. 373, 379–80 (2011); Herbert M. Kritzer, The Future Role of “Law Workers”: Rethinking the Forms of Legal Practice and the Scope of Legal Education, 44 Ariz. L. Rev. 917, 919–20 (2002).

[37]. See Scott R. Peppet, Regulating the Internet of Things: First Steps Toward Managing Discrimination, Privacy, Security, and Consent, 93 Tex. L. Rev. 85, 92–95 (2014).

[38]. See Joseph Esposito, Thinking Through a Strategy for Digital Rights Management, Scholarly Kitchen (Apr. 23, 2012), http://scholarlykitchen.sspnet.org/2012/04/23/thinking-through-a-strategy-for-digital-rights-management.

[39]. See Annalee Newitz, How Copyright Enforcement Robots Killed the Hugo Awards, io9 (Sept. 3, 2012, 10:25 AM), http://io9.com/5940036/how-copyright-enforcement-robots-killed-the-hugo-awards.

[40]. Travis D. Breaux et al., A Distributed Requirements Management Framework for Legal Compliance and Accountability, 28 Computers & Security 8, 9 (2009).

[41]. Tal Z. Zarsky, Transparent Predictions, 2013 U. Ill. L. Rev. 1503, 1511.

[42]. Anil Kalhan, Immigration Surveillance, 74 Md. L. Rev. 1, 67 (2014).

[43]. Michael L. Rich, Should We Make Crime Impossible?, 36 Harv. J.L. & Pub. Pol’y 795, 802 (2013). But see Michael L. Rich, Limits on the Perfect Preventive State, 46 Conn. L. Rev. 883, 883 (2014).

[44]. Sonja B. Starr, Evidence-Based Sentencing and the Scientific Rationalization of Discrimination, 66 Stan. L. Rev. 803, 805 (2014).

[45]. See Primavera De Filippi, Ethereum: Freenet or Skynet?, Berkman Center for Internet & Soc’y Harv. U., https://cyber.law.harvard.edu/events/luncheon/2014/04/difilippi (last updated Jan. 31, 2015); Vitalik Buterin, Superrationality and DAOs, Ethereum (Jan. 23, 2015), https://blog.ethereum.org/2015/01/23/superrationality-daos.

[46]. Andreas Antonopoulos, Bitcoin Security Model: Trust by Computation, Radar (Feb. 20, 2014), http://radar.oreilly.com/2014/02/bitcoin-security-model-trust-by-computation.html (“Bitcoin is a distributed consensus network that maintains a secure and trusted distributed ledger through a process called ‘proof-of-work.’ Bitcoin fundamentally inverts the trust mechanism of a distributed system. Traditionally, as we see in payment and banking systems, trust is achieved through access control, by carefully vetting participants and excluding bad actors. This method of trust requires encryption, firewalls, strong authentication and careful vetting. The network requires investing trust in those gaining access.”).

[47]. Jerry Brito & Andrea Castillo, Mercatus Ctr., George Mason Univ., Bitcoin: A Primer for Policymakers (2013), available at http://mercatus.org/sites/default/files/Brito_BitcoinPrimer_v1.3.pdf.

[48]. Hans-Hermann Hoppe, Democracy: The God that Failed (2001).

[49]. Samir Chopra & Laurence F. White, A Legal Theory for Autonomous Artificial Agents 9–10 (2011).

[50]. Citron, supra note 12, at 1249.

[51]. Id. at 1268–69; David A. Super, An Error Message for the Poor, N.Y. Times (Jan. 3, 2014), http://www.nytimes.com/2014/01/04/opinion/an-error-message-for-the-poor.html.

[52]. Erik F. Gerding, Code, Crash, and Open Source: The Outsourcing of Financial Regulation to Risk Models and the Global Financial Crisis, 84 Wash. L. Rev. 127, 134 (2009).

[53]. U.S. Fin. Crisis Inquiry Comm’n, The Financial Crisis Inquiry Report: Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States (Jan. 2011), http://www.gpo.gov/fdsys/pkg/GPO-FCIC/pdf/GPO-FCIC.pdf.

[54]. Scott Patterson, Dark Pools: High Speed Traders, AI Bandits, and the Threat to the Global Financial System (2012); Sal Arnuk & Joseph Saluzzi, Broken Markets: How High Frequency Trading and Predatory Practices on Wall Street Are Destroying Investor Confidence and Your Portfolio (2012).

[55]. Arnuk & Saluzzi, supra note 54.

[56]. See Report of the Staffs of the CFTC and SEC to the Joint Advisory Committee on Emerging Regulatory Issues, Findings Regarding the Market Events of May 6, 2010, at 1 (Sept. 30, 2010), http://www.sec.gov/news/studies/2010/marketevents-report.pdf.

[57]. Id. at 5.

[58]. See id. at 79. Note also the disastrous $440 million loss of Knight Capital in August 2012 that was traced to IT and software issues at the firm that took nearly an hour to fix. Dan Olds, How One Bad Algorithm Cost Traders $440m, Reg. (Aug. 3, 2012, 9:32 AM), http://www.theregister.co.uk/2012/08/03/bad_algorithm_lost_440_million_dollars; Stephanie Ruhle et al., Knight Trading Loss Said to Be Linked to Dormant Software, Bloomberg (Aug. 14, 2012, 3:23 PM), http://www.bloomberg.com/news/2012-08-14/knight-software.html. Korean exchanges faced a smaller crash in late 2013. See High Frequency Trading and Predatory Market Making, Themis Trading 1 (Dec. 2013), http://blog.themistrading.com/wp-content/uploads/2013/12/RTL-HFT_Bibliography_2013.pdf.

[59]. Daniel Martin Katz, Quantitative Legal Prediction–or–How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry, 62 Emory L.J. 909, 945 (2013).

[60]. See The E-Discovery Market Is Growing Fast, EDiscovery Bus. (Feb. 8, 2013), http://ediscoverybusiness.com/the-e-discovery-market-is-growing-fast.

[61]. See Pugsley, supra note 29, at 15.

[62]. Charles Yablon & Nick Landsman-Roos, Predictive Coding: Emerging Questions and Concerns, 64 S.C. L. Rev. 633, 646 (2013); Katz, supra note 59, at 943.

[63]. See Yablon & Landsman-Roos, supra note 62, at 666.

[64]. Pugsley, supra note 29, at 15; Jack G. Conrad, E-Discovery Revisited: The Need for Artificial Intelligence Beyond Information Retrieval, 18 Artificial Intelligence & L. 321 (2010).

[65]. See Conrad, supra note 64, at 324.

[66]. See Pugsley, supra note 29, at 15.

[67]. Katz, supra note 59, at 944 (noting that the ubiquitous use of email keeps eDiscovery costs high).

[68]. Fed. R. Civ. P. 26(b)(1).

[69]. Bob Rohlf & Scott Giordano, The Five Pillars of In-House Ediscovery, ACC Docket, Dec. 2012, at 42; see Dana A. Remus, The Uncertain Promise of Predictive Coding, 99 Iowa L. Rev. 1691 (2014).

[70]. 846 F. Supp. 2d 1335, 1351 (N.D. Ga. 2012).

[71]. See Rohlf & Giordano, supra note 69, at 42.

[72]. Harkabi v. SanDisk Corp. is another eDiscovery sanctions case based on an IT department’s policies and inability to find electronically stored information within a reasonable period of time. 275 F.R.D. 414, 421 (S.D.N.Y. 2010).

[73]. Orin Kerr, The Origin of “On All Fours”, Volokh Conspiracy (Dec. 16, 2006, 10:37 PM), http://www.volokh.com/posts/1166587868.shtml (“‘[O]n all fours’ . . . . means that the former case raises the same facts and legal principles as the latter and is therefore highly relevant as a precedent.”).

[74]. Andrea J. Paterson, Fee Agreements: Structuring Alternative Fee Agreements to Enhance Recovery of Fees and Align Interests of Attorneys and Clients, 35 Advocate 10, 10, 13 (2006).

[75]. John Sirman, Artificial Intelligence and the Law, 66 Tex. B.J. 17, 17 (2003).

[76]. Eric Talley & Drew O’Kane, The Measure of a MAC: A Machine-Learning Protocol for Analyzing Force Majeure Clauses in M&A Agreements, 168 J. Inst. & Theoretical Econ. 181, 182 (2012) (“[T]his human element is unavoidable (and even desirable), since the practice of law is in many ways the art of navigating between nuanced forms of expression and hard legal outcomes or predictions.”).

[77]. Harry Surden, The Variable Determinacy Thesis, 12 Colum. Sci. & Tech. L. Rev. 1, 3 (2011) (noting that only some areas of law are amenable to computer-generated solutions).

[78]. But see John O. McGinnis & Russell G. Pearce, The Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services, 82 Fordham L. Rev. 3041, 3053 (2014), available at http://ssrn.com/abstract=2436937 (showing that a new company, Lex Machina, uses historical court data to predict patent litigation outcomes).

[79]. Carol M. Rose, Crystals and Mud in Property Law, 40 Stan. L. Rev. 577, 578–79 (1988) (“[T]he straightforward common law crystalline rules have been muddied repeatedly by exceptions and equitable second-guessing, to the point that the various claimants under real estate contracts, mortgages, or recorded deeds don't know quite what their rights and obligations really are. And the same pattern has occurred in other areas too.”).

[80]. See L. Karl Branting, A Reduction-Graph Model of Precedent in Legal Analysis, 150 Artificial Intelligence 59, 64 (2003).

[81]. See id.; Ronald E. Wheeler, Does WestlawNext Really Change Everything? The Implications of WestlawNext on Legal Research, 103 L. Libr. J. 359, 366 (2011) (describing the downside of search results that are partly based on crowdsourcing, especially on unusual problems in the law and cutting-edge research).

[82]. See Edwina L. Rissland, Artificial Intelligence and Law: Stepping Stones to a Model of Legal Reasoning, 99 Yale L.J. 1957, 1967 (1990) (“[T]he rule-based approach assumes that the set of rules has no inherent difficulties, like ambiguities, gaps, and conflicts. To make a rule-based system work, the programmer must usually eliminate these problems and make the rules appear more consistent and complete than they are.”).

[83]. See Steven Levy, The AI Revolution Is on, Wired (Dec. 27, 2010, 12:00 PM), http://www.wired.com/magazine/2010/12/ff_ai_essay_airevolution; Frey & Osborne, supra note 2; Marcello Ceci & Aldo Gangemi, An OWL Ontology Library Representing Judicial Interpretations, Semantic Web J., http://www.semantic-web-journal.net/sites/default/files/swj323_0.pdf (last visited Apr. 12, 2015).

[84]. See, e.g., Adam Zachary Wyner, Weaving the Legal Semantic Web With Natural Language Processing, VoxPopuLII (May 17, 2010), http://blog.law.cornell.edu/voxpop/tag/general-architecture-for-text-engineering/ (describing the complexity of making natural language machine readable for legal purposes, and discussing the use of GATE—general architecture for text engineering—to make it easier to search large databases for legal professionals, not for laypersons).

[85]. Sirman, supra note 75, at 17.

[86]. Filippo Galgani et al., HAUSS: Incrementally Building a Summarizer Combining Multiple Techniques, 72 Int’l J. Hum.-Computer Stud. 584, 586 (2014).

[87]. See Sirman, supra note 75, at 17.

[88]. Darla Jackson, Watson, Answer Me This: Will You Make Librarians Obsolete or Can I Use Free and Open Source Software and Cloud Computing to Ensure a Bright Future?, 103 L. Libr. J. 497 (2011).

[89]. See Kevin D. Ashley & Stefanie Brüninghaus, Computer Models for Legal Prediction, 46 Jurimetrics 309 (2006).

[90]. See McGinnis & Pearce, supra note 78, at 3048.

[91]. See Frey & Osborne, supra note 2, at 40.

[92]. See Adam Zachary Wyner, Weaving the Legal Semantic Web With Natural Language Processing, VoxPopuLII (May 17, 2010), http://blog.law.cornell.edu/voxpop/tag/general-architecture-for-text-engineering.

[93]. See Frey & Osborne, supra note 2, at 40; McGinnis & Pearce, supra note 78, at 3050.

[94]. See McGinnis & Pearce, supra note 78, at 3046.

[95]. See id.

[96]. See Goldin & Margo, supra note 18, at 1 (stating that “compression” refers to the narrowing of most incomes: there are fewer billionaires, but also fewer impoverished persons).

[97]. See id.

[98]. Dodd-Frank Wall Street Reform and Consumer Protection Act, 12 U.S.C. §§ 5301–5641 (2012) (regulating certain banking practices); Patient Protection and Affordable Care Act, 42 U.S.C. §§ 18001–18121 (2012) (using government funds to provide insurance for all).

[99]. See James K. Galbraith, Created Unequal: The Crisis in American Pay (1998).

[100]. Frank Pasquale, Capital’s Offense: Law’s Entrenchment of Inequality, boundary2 (Oct. 1, 2014), http://boundary2.org/2014/10/01/capitals-offense-laws-entrenchment-of-inequality/.

[101]. Thomas Geoghegan, See You in Court: How the Right Made America a Lawsuit Nation (2006) (explaining how stingy social welfare provision in the United States increases litigation in tort and other contexts).

[102]. See, e.g., Steven M. Teles, Kludgeocracy: The American Way of Policy, New America Foundation (Dec. 2012), at http://newamerica.net/sites/newamerica.net/files/policydocs/Teles_Steven_Kludgeocracy_NAF_Dec2012.pdf (“From the mind-numbing complexity of the health care system (which has only gotten more complicated, if also more just, after the passage of Obamacare), our Byzantine system of funding higher education, and our bewildering federal-state system of governing everything from the welfare state to environmental regulation, America has chosen more indirect and incoherent policy mechanisms than any comparable country.”).

[103]. Mark Scott, Limit ‘Right to Be Forgotten’ to Europe, Panel Tells Google, N.Y. Times (Feb. 6, 2015, 5:38 AM), http://bits.blogs.nytimes.com/2015/02/06/limit-right-to-be-forgotten-to-europe-panel-says.

About the Author

Frank Pasquale is Professor of Law, University of Maryland Francis King Carey School of Law; Affiliate Fellow, Yale Information Society Project; Member, Council on Big Data, Ethics, and Society. Professor Pasquale wishes to thank Michael Madison and Bonnie Kaplan for valuable conversations on the promise and limits of automation.

Glyn Cashwell is a Senior Systems Engineer, Vistronix.
