Tuesday, February 18, 2020

The Future of Social Ordering

When might future courts operate more like self-driving cars, and when like auto-piloted planes? When might future legal proceedings still require human attorneys and firm handshakes? When I want to ask such questions, I pose them to Tim Wu. This conversation focuses on Wu’s recent Columbia Law Review article “Will Artificial Intelligence Eat the Law? The Rise of Hybrid Social-Ordering Systems.” Wu is the Julius Silver Professor of Law, Science and Technology at Columbia Law School. (...)

Automated judicial decision-making raises predictable sci-fi-infused anxieties about which of its most precious human qualities our judicial process stands to lose. Your paper remains somewhat agnostic on what this “special sauce” for a functional and credible court system might entail, but you do point to a cluster of concerns that arise in any number of AI-related conversations, particularly concerns of perverse instantiation (in which digitized mechanisms somehow fail to realize our true aims) and of unaccountability (in which we cannot assess certain decisions produced by a computer program operating far beyond our own cognitive capacities). Could you flesh out those problematic prospects with a couple of present-day and near-future examples? And could you offer any comparable possibilities here of AI helping to improve on the judicial special sauce — beyond just freeing up time by farming out the most predictable cases?

Have you ever felt angry or frustrated when your computer or some site doesn’t function properly, and you then encounter the complete lack of human accountability that is the essence of software-based systems? I think that captures the worst of what “robotic justice” could become: impersonal, inhumane, and unflinching. As the scholars Richard M. Re and Alicia Solow-Niederman put it, software decision systems already have a bad tendency to be “incomprehensible, data-based, alienating, and disillusioning.”

We’re already familiar with the automated speed traps that mail you a ticket if you drive too fast. It is not hard to imagine a future where the combination of pervasive surveillance and advanced AIs leads to the automated detection and punishment of many more crimes: say, public littering, tax evasion of any kind, conspiracy (agreements to commit a crime), possession of obscene materials, or threatening the President.

Automated enforcement of these laws would, perhaps, make for a more orderly society, since many of them are under-enforced. Yet you don’t need to be a committed civil libertarian to wonder about a future of being constantly watched, accused, and potentially convicted, all without human involvement.

But despite the examples I’ve just given, I still think that full automation of most aspects of our legal system will remain implausible for quite some time. And in this paper I want to stress why problems in legal decision-making may be particularly resistant to full automation.

When it comes to making complex AI-augmented decisions, what counts as “success” differs from one field to another. Consider medical diagnosis: as a patient, if an AI gives you a more accurate reading than a human doctor, you probably have little reason to prefer the human doctor just because she’s human. If self-driving cars come to have fewer accidents than cars operated by humans, same thing. In each example, we tend to defer to a relatively objective metric of success or failure.

In the law, however, success (or certainly “justice”) can’t be so easily measured. Legitimacy and procedural fairness can play a large role in what one considers a just result. Decisions might put people in prison for the rest of their lives, or even put someone to death — and how that decision is arrived at seems to matter a great deal. Even if somebody writes a program that, on the evidence, tends to outperform the average trial jury in terms of compliance with the law as written, I doubt many of us would accept as legitimate that program’s determination of guilt or innocence for a serious crime.

In a typical commercial dispute, meanwhile, both sides usually think they are right. The cases that get litigated, as opposed to settled, could usually go either way. Hence the quality of a decision often has less to do with the particular claims before the judge (or judges) than with how the logic of this decision, as precedent, will fare as a rule of decision for future cases.

These are just a few of the calculations that make a metric for high-quality legal decisions difficult to arrive at. Those who have spent time in legal theory know there are other challenges as well, such as Karl N. Llewellyn’s distinction between the written rules and the real rules — with the law, in the hands of a fluent judge or lawyer, rarely operating precisely as it has been written. This makes the potential for absurd or even dangerous results through perverse instantiation very high. In fact, even without AI, the legal system remains highly prone to yielding absurd results. One basic job of a judge involves preventing lawyers from abusing the system to achieve such ends.

I do concede, though, that some AI advocates might view all these concerns as epiphenomenal or empty. Take the legitimacy of computerized adjudication, for example: it may just be a matter of time. Maybe right now we still can’t accept the idea of a computer finding a prisoner guilty. But if you told someone from an earlier generation that we could trust a computer to fly an airplane or dispense large sums of cash without supervision, they’d surely look at you funny.

And it isn’t as if humans are perfect. “Judgment” can be another word for bias. In the US, African Americans regularly get arrested and convicted of minor crimes for which white people might get a pass. So if, over time, software proves itself fairer and more accurate in legal decision-making (less subject to such biases, more likely to weigh objectively all of the evidence), then perhaps we’ll come to regard computerized judges as more legitimate than human judges or juries. As the Bitcoin believers like to say: “In code we trust.”

by Andy Fitch, LARB | Read more:
Image: Tim Wu, uncredited