Bloomberg Law
March 29, 2019, 8:01 AM UTC

INSIGHT: Four Principles for the Trustworthy Adoption of AI in Legal Systems

Eileen M. Lach
IEEE
Nicolas Economou
H5

The ascent of intelligent machines continues unabated in all realms of life and all institutions of society, including in the world’s legal systems.

As, respectively, the former general counsel of the world’s largest technical professional organization and the co-founder of a company that pioneered the use of artificially intelligent systems in civil and criminal discovery more than 15 years ago, we believe in the promise of science. AI offers tantalizing opportunities to improve access to justice, reduce bias, and advance both the functions of the law and the values that animate it.

But the surrender of human decision-making to machines entails the dystopian risk of a dehumanized legal system, which mindlessly perpetuates biases, sacrifices the spirit of the law in pursuit of efficiencies, undermines legal institutions, destabilizes jurisprudence, and corrodes public trust.

We can reap the benefits of artificial intelligence in the law while mitigating its risks, but only if we develop a normative answer to the central question facing us: “When it comes to the legal system, to what extent should society entrust to artificial intelligence decisions that affect people?”

Laudable efforts to address this question have resulted in meaningful but fragmentary contributions that focus on a particular legal application of AI or on the needs of a specific constituency. Norms for the trustworthy adoption of AI in the legal system, considered in its entirety as an institution accountable to the citizen, are long overdue.

The IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems has addressed this challenge in Ethically Aligned Design (EAD1e), “A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems.”

This groundbreaking, multidisciplinary effort, comprising contributions from a group of international experts in domains ranging from law, ethics and philosophy to computer science and engineering, articulates four principles for the dependable adoption of AI in legal systems. These principles are constitutive of informed trust and are designed to be:

  • individually necessary and collectively sufficient,
  • globally applicable but culturally flexible, and
  • capable of being operationalized.

Principle 1: Effectiveness

An essential component of trust in a technology is that it succeeds in meeting its intended purpose. A familiar example is car safety. We rely on Insurance Institute for Highway Safety (IIHS) crash-test ratings to help us assess whether manufacturing systems, which most of us don’t understand, produce safe cars.

In the US legal system, the National Institute of Standards and Technology (NIST) conducted analogous studies between 2007 and 2011 to assess the effectiveness of artificially intelligent systems for electronic discovery. Some systems produced impressive results, some less so (a follow-up study showed that two of the systems evaluated outperformed humans).

As a corollary benefit, courts have increasingly espoused the metrics of effectiveness used by NIST—“precision” and “recall.” These metrics are conceptually akin to the IIHS ratings: courts, litigants, and society can understand them and rely on them to deploy AI-enabled processes in legal discovery.
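
As a rough illustration only (the figures below are hypothetical and not drawn from the NIST studies), precision measures the share of documents a system retrieves that are actually responsive, while recall measures the share of all responsive documents the system manages to find:

```python
# Illustrative sketch: computing precision and recall for a document-review
# process. All counts here are hypothetical placeholders.

def precision_recall(retrieved_relevant: int, retrieved_total: int, relevant_total: int):
    """Precision: fraction of retrieved documents that are relevant.
    Recall: fraction of all relevant documents that were retrieved."""
    precision = retrieved_relevant / retrieved_total
    recall = retrieved_relevant / relevant_total
    return precision, recall

# Hypothetical example: a system flags 1,000 documents, 800 of which are truly
# responsive, out of 1,200 responsive documents in the full collection.
p, r = precision_recall(retrieved_relevant=800, retrieved_total=1000, relevant_total=1200)
print(f"precision = {p:.0%}, recall = {r:.0%}")  # precision = 80%, recall = 67%
```

High precision means the process returns little irrelevant material; high recall means it misses few responsive documents. Both are needed to judge whether an AI-enabled review meets its intended purpose.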

Principle 2: Competence

We surrender our bodies to scalpel-wielding humans (or our water pipes to plumbers) only because society or professional bodies have devised norms that define the credentials such people must possess.

Today, AI-wielding judges, lawyers, and law enforcement personnel determine for themselves whether they are fit to operate or interpret the artificial intelligence they use as an aid to their professional judgment. Yet effectively combining human and artificial intelligence is a science in which most operators of AI in the law are simply not competent.

The legal system cannot rely on AI operators practicing science without a license any more than the medical field can. The trustworthy adoption of AI in the law requires the development of professional gauges of competence in the use and interpretation of AI, via professional education and accreditation.

Principle 3: Accountability

An essential component of trust in a technology is confidence that it is possible to apportion responsibility among the human agents who create, procure, deploy, and operate it. With no mechanisms to hold these agents accountable, it is difficult to assess responsibility for undesirable outcomes under any framework, legal or otherwise.

When a judge relies on a biased black-box algorithm to mete out an inappropriately lengthy criminal sentence to a minority defendant, who is accountable? The algorithm’s manufacturer, the government agency that procured it, or the participants in the process who failed to understand (or persuade the court of) the algorithm’s limitations?

The answer, today, seems to be: nobody, and let the defendant bear the brunt!

Principle 4: Transparency

Is there sufficient information to determine the extent to which (and purposes for which) an AI-enabled process is entitled to be trusted?

In some circumstances, transparency into the evidence supporting Principles 1-3 may suffice. At other times, an oversight authority may be required to protect both the consumer, who is unlikely to be able to understand the algorithms, and the purveyor of AI, who strives to protect intellectual property. In other instances, such as when an AI-enabled process used in sentencing manifestly fails Principles 1-3, transparency into algorithms and the underlying data may be the sole available path to determining the extent to which the process should be trusted (or mistrusted).

These four principles provide a definition of informed trust that can be operationalized in applications of AI across the legal system, from law-making to civil and criminal justice and law enforcement. Important work remains: determining what types of empirical evidence constitute satisfactory proof that each principle has been duly implemented, and defining the metrics and accreditations that can serve as instruments for the trustworthy adoption and use of AI within the legal system.

IEEE’s work sets us solidly on the path to completing that work.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Eileen M. Lach is on the executive committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and on the Board of Directors and Office of the CEO of IEEE GlobalSpec Inc. She was general counsel and chief compliance officer of IEEE from 2011 to 2018 and previously served as vice president, corporate secretary and associate general counsel of Wyeth and as general counsel of Amnesty International.

Nicolas Economou is the chief executive of H5. He chairs the Law Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and leads the Law and Society Initiative at The Future Society. He is also a member of the Council on Extended Intelligence (CXI), a joint initiative of the MIT Media Lab and IEEE-SA.
