Bloomberg Law
July 12, 2016, 4:48 PM UTC

Time to Regulate AI in the Legal Profession? (Perspective)

Wendy Wen Yu Chang
Hinshaw & Culbertson LLP

Editor’s Note: The author of this post is a member of the American Bar Association’s Standing Committee on Ethics and Professional Responsibility.

Artificial intelligence (AI) has become an increasingly common word in our legal lexicon over the past year. While AI is not new, ROSS Intelligence’s announcement of its partnership with a handful of big law firms this past spring kicked off a wave of vibrant conversation about AI in the legal profession.

AI is the use of automated, computer-based systems to process and analyze large amounts of data and reach reasoned conclusions.

The technology’s potential benefits are immense. As projected, AI will be able to reach reasoned conclusions that outpace the human mind’s, at significantly lower cost and with increased speed, accuracy and consistency. Proponents contend that the cost savings will permit services to be offered at lower rates, bridging the current access-to-justice gap. That gap manifests itself in the fact that 80 percent of those who need legal services cannot afford, and thus cannot use, them.

The present argument is therefore that AI does not threaten the legal industry, but rather will change it and work in tandem with it. Perhaps. At least as AI technology commonly exists right now.

But AI’s application is much broader. In this post-2008 recession world, we have seen both a contraction in available legal work and increasing client demand for efficiency.

AI is a tool with which law firms and their clients can try to address this new legal landscape. Better services. Faster. Cheaper. What, proponents ask, could be wrong with that? The resulting further contraction in available legal work is nothing more than the natural progression of technology’s impact, a road we are on even without AI. Clients, they say, will be better served and happier.

In this, AI proponents are not wrong. But in chasing the amazing possibilities, the profession must not forget the fundamentals. In our race to use technology to be better, faster and cheaper, we must not forget that the law’s effects will always be endured by humans. And at least under the current technology, AI still needs humans: someone has to create the machines, write the programs, feed in the data, have problems to solve, and confirm the accuracy of conclusions. This human factor is, and will always be, inescapable. In that humanity, we cannot ignore the danger of a failure of competence.

Lawyers are no strangers to technology’s ethical implications for the profession.

The core legal ethical duty of competence requires that a lawyer employ the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation. Model Rule 1.1. To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, “including the benefits and risks associated with relevant technology….” Model Rule 1.1, Comment [8]. Lawyers must protect confidential client information. Model Rule 1.6. Lawyers have a duty to supervise those they work with. Model Rules 5.1 and 5.3. Lawyers must exercise independent judgment. Model Rule 2.1.

All of this adds up to a basic, fundamental rule: in using technology, lawyers must understand the technology well enough to assure themselves that they are using it in a way that complies with their ethical obligations, and that the advice the client receives is the result of the lawyer’s independent judgment.

What if the lawyer using AI, through an unintentional lack of knowledge of how to use the technology, feeds in the wrong data and asks the wrong question? The effect is a wrong answer. But will anyone know it is wrong? At the end point, it looks and feels the same as a right one.

A lawyer might be able to recognize anomalies due to his or her legal training, and know enough to test the answer, ask a different question, or adjust the data. If the lawyer is looking, that is. We have seen in the e-discovery world over the past decade how easily some lawyers might abdicate responsibility and blindly trust the technology, only to get it horribly wrong.

And we have seen ethics law develop to establish that such abdication of responsibility is not ethically permissible. See, e.g., California Formal Ethics Opinion 2015-192 and authorities cited therein (e-discovery competence). A lawyer must know, test, look, supervise, understand, and make all necessary adjustments so that, while he or she may be using AI as a tool, the ultimate advice is still independently his or hers and is ethically compliant. Requiring lawyers to adhere to these standards provides necessary client protection.

The ultimate danger is how competent it all looks. Technology, especially AI technology, can be deceptive because its inner workings are invisible to the naked eye. A user cannot see what is going on behind the scenes. One asks a question, and the answer appears.

But even assuming the user feeds the correct information into the computer, he or she must still implicitly trust that the computer is doing what it says it is doing. Is the program actually doing what it says it will?

A lawyer, at least, is ethically required not to blindly accept the answer, and his or her training may help in spotting mistakes. Lay people accessing AI legal services directly without a lawyer have no such advantage, and might not know that something is wrong until they have relied on the wrong answer and taken a legal step, by which point it is too late.

All they had was the promise that the computer would apply the law to their factual situation, and provide an answer to their legal problems at a price they could afford. When an entity or individual not licensed to practice law makes and acts on this promise, we call it the unauthorized practice of law. We do not tell clients that if they choose to use an unlicensed entity or person because it is cheaper, they do so at their own risk. Why should there be a different result with an AI service, if the same underlying promise and application of law to individual facts have occurred?

The answer cannot be that a computer cannot possibly practice law. The same actions have occurred. The computer does not exist in a void. Nor does it function on its own. Humans are involved and will be the ones who will collect any profit that ensues. AI legal services should not be permitted to hold themselves out as providing legal services without an actual lawyer’s involvement and supervision.

There is big money in AI. Technology companies can create AI prototypes and make sweeping promises. But not every provider delivers on those promises.

Who will regulate the providers and require quality standards? It does not have to be the same set of rules that lawyers abide by. It can be independent. But right now, there is no regulatory scheme. This creates significant uncertainty for both the legal profession and the AI legal technology industry itself, neither of which knows what it can and cannot do with any commercial certainty. At least for now, there is also significant danger to the public at large. Stepping into this regulatory void is necessary, but the opportunity to do so will not exist indefinitely.

The industry is moving along without us. Very quickly. We must act, or we will be left behind.

The views expressed herein are the author’s own.
