Courts and lawyers should address emerging ethical and legal issues related to the use of artificial intelligence, a resolution up for a vote at the American Bar Association’s annual meeting proposes.

Legal technology experts welcomed the proposal by the ABA’s science and technology law section, which will be put before the membership group’s House of Delegates in San Francisco in the coming days, but said it’s long overdue.

“We’re falling in the wake of other organizations trying to get a handle on this,” said Sharon Nelson, a science and tech law section member and co-founder of Sensei Enterprises, a small business that provides digital forensics, information technology, and cybersecurity services.

The legal profession is known to be resistant to change, and it’s been slower than other industries to embrace modernization through legal technologies. But tech experts say lawyers’ lack of a fundamental understanding of these tools will soon put them at a disadvantage if they don’t change their ways.

“Artificial intelligence promises to change not only the practice of law but our economy as a whole. We clearly are on the cusp of an AI revolution,” the resolution says.

If it’s adopted, the science and technology law section hopes to establish a working group to define guidelines for legal and ethical AI usage and to develop a model standard it would submit for adoption at a future ABA House of Delegates meeting.

Questions, Some Answers

The resolution is one of 57 to be voted on by the ABA’s policy-making body at the upcoming meeting.

It urges the legal profession to address “emerging ethical and legal issues related to the usage of artificial intelligence,” including bias in automated decisions made by AI, and oversight of both the technology and the vendors that provide it.

AI tools, for example, can be used by law firms to assist in legal research, trim document review time, and help revise contracts.

Even though AI has only recently made great strides in the legal profession, the resolution should have been adopted a while ago, Nelson said.

But it raises interesting points and asks key questions, said Nicolas Economou, chief executive of H5, which provides search and review, data analytics, and eDiscovery for litigation and investigations.

For instance, one question that’s often overlooked but addressed in the resolution is “Does this thing work?” said Economou, who also chairs the Law Committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Unfortunately, what’s missing from the resolution is the answer, he said.

Another point the resolution raises is that legal AI should be “audited and auditable,” which speaks to its transparency and trustworthiness, Economou said.

Nelson concurred that lawyers need to know more about the AI they’re using. Every AI is different, so we should be asking what data was used to develop it and by whom, she said.

It’s possible for lawyers to serve clients better with AI, but we need to know what we’re using, Nelson said. And we haven’t really “looked behind the curtain” enough, she said.

Transparency is also vital for examining bias in AI.

“For all the advantages that AI offers for lawyers, there also is a genuine concern that AI technology may reflect the biases and prejudices of its developers and trainers, which in turn may lead to skewed results,” the resolution says.

The resolution cites as an example the Correctional Offender Management Profiling for Alternative Sanctions software, used by some courts to predict the likelihood of recidivism in criminal defendants. Studies have shown COMPAS to be biased against black defendants.

This is an example of “black box” AI, where we can’t see inside to find out how it reaches its results, Nelson said.

To combat bias, Economou advocates a kind of “clinical trial” for AI, in which the goal is a bias-free outcome for a particular AI process and tests are run to measure it. He compares it to car manufacturing: most of us don’t know how cars are made, but we know crash tests will tell us whether they meet the goal of being safe, he said.
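One way such a “clinical trial” can work in practice is to measure whether a tool’s error rates differ across demographic groups, the sort of disparity the COMPAS studies flagged. The Python sketch below is purely illustrative: the records, risk threshold, and group labels are invented assumptions, not any vendor’s data or method.

```python
# Illustrative sketch of a fairness "audit": compare false positive rates
# across groups for a hypothetical risk-scoring tool. All data below is
# invented for illustration only.

from collections import defaultdict

# (group, risk_score, reoffended) -- hypothetical records
records = [
    ("A", 0.8, False), ("A", 0.6, False), ("A", 0.9, True), ("A", 0.3, False),
    ("B", 0.7, False), ("B", 0.2, False), ("B", 0.9, True), ("B", 0.4, False),
]

THRESHOLD = 0.5  # scores above this are labeled "high risk" (assumed cutoff)

# False positive rate per group: labeled high risk but did not reoffend,
# divided by all in the group who did not reoffend.
false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, score, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if score > THRESHOLD:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
```

A large gap between the groups’ false positive rates is one measurable signal of the kind of skew the resolution warns about, much as a failed crash test signals an unsafe car.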

AI for All

Eliminating bias and increasing transparency are especially important because lawyers have greater access to AI than they did several years ago, in part because the technology has become more affordable, Nelson and Economou said.

With the democratization of access to AI, it “should be beneficial (or at least not detrimental) to the lawyer, the court, clients, and society in general,” the ABA resolution says.

Economou agrees that there needs to be a “trustworthy adoption of AI” in the legal system, which would also include resolving the question of how to apportion responsibility if something does go wrong.

Right now, this is an open question, Nelson said. “Where’s the liability with AI? With programmers? Manufacturers? Law firms that hire outside vendors?” she asked.

For attorneys using AI or exploring its use, it’s “critical” to attend continuing legal education courses taught by ethicists and practicing attorneys to help them navigate these waters, she said.