Bloomberg Law
Jan. 19, 2017, 4:13 PM UTC

Don’t Worry, Attorneys: AI Comes In Peace (Perspective)

A.J. Shankar

Editor’s Note: The author holds a PhD in computer science and founded an e-Discovery software company.

Artificial intelligence has been promised before, and the promises have mostly fizzled. But this time it’s different. Self-driving cars are roaming the highways. This year’s Consumer Electronics Show is packed with devices that channel Amazon’s AI-powered, voice-activated personal assistant, Alexa. And early last year, Google’s AlphaGo program rendered the world’s greatest Go master “speechless” by beating him in what he described as a “nearly perfect game.”

Every day seems to bring another headline about AI’s newfound prowess. The latest comes from the New York Times, in a lengthy feature on Google Brain, a Google research project pushing the boundaries of AI. The article includes a startling anecdote about what happened when Google applied its latest AI technology to its 10-year-old Translate service. Suddenly Translate began converting English literature into Japanese — and back — with a fluency that stunned native speakers. And it kept getting better. By the end of last year, Translate was improving overnight to a degree “roughly equal to the total gains the old one had accrued over its entire lifetime.”

AI isn’t somewhere over the horizon anymore. It’s coming, fast. And once people get past the inevitable Skynet jokes, they start wondering how AI will affect them — in particular how it will affect their jobs. Everyone remembers how automation upended the manufacturing industry. Will artificial intelligence do the same for the professions — what is often called knowledge work?

AI will undoubtedly affect white-collar jobs. It stands to make some obsolete. But as a legal technologist, I’m quite sure it won’t replace attorneys any time soon. In fact, attorneys are some of the people who stand to benefit most from its advance.

Data and context

To understand why, it’s important to understand what AI is, what it can do and what it can’t. And to do that, we need to clarify our terms.

Part of the confusion about AI comes from our tendency to humanize algorithms. We say a digital thermostat “notices” or “realizes” that we turn up the heat at certain times of day, and “anticipates our needs” by raising the temperature for us. That’s true in a limited sense. But the algorithm that drives this behavior is not intelligent in any meaningful way. It doesn’t “notice,” “realize” or “understand” the way we do. Most of what computers do is not AI by any standard. Yet we dignify some of it with the name “artificial intelligence.” What makes these cases special?

When experts use the term AI, they usually mean some kind of machine learning system — one that can adapt to, and learn from, input. A traditional thermostat, which simply applies rules (e.g. “If the ambient temperature falls below 65 degrees F, turn on the heat”), does not qualify. A thermostat that learns might. (If we’re being scrupulous, we might call it “a kind of rudimentary artificial intelligence that falls well short of, but shares certain important qualities with, the real thing.”)
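
To make the distinction concrete, here is a minimal sketch in Python. The fixed-rule device just applies its threshold; the learning one adjusts its setpoint in response to user feedback. Every name and number here is invented for illustration, not drawn from any real product.

    def rule_based_thermostat(ambient_temp_f):
        # Fixed rule: turn on the heat when the room falls below 65 F.
        return ambient_temp_f < 65.0

    class LearningThermostat:
        # Adapts its setpoint from user overrides -- a rudimentary
        # form of learning, far short of real intelligence.
        def __init__(self, setpoint_f=65.0, rate=0.2):
            self.setpoint_f = setpoint_f
            self.rate = rate  # how quickly it adapts to feedback

        def record_override(self, user_choice_f):
            # Nudge the setpoint toward what the user actually chose.
            self.setpoint_f += self.rate * (user_choice_f - self.setpoint_f)

        def heat_on(self, ambient_temp_f):
            return ambient_temp_f < self.setpoint_f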

Experts further recognize two categories of machine learning: supervised and unsupervised. Supervised machine learning adapts through feedback. Generally, a supervisor will train the AI using a series of labeled inputs. They might be tagged images. They could also be documents, sounds or nearly anything else, depending on what the system is designed to do. The labels are considered “ground truth” and inform the system what it is learning. The system processes the training data and receives feedback on how it evaluated each input, right or wrong. The system is then adjusted based on the feedback, and the training continues until the AI performs as desired.
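
A toy example may help make that training loop concrete. The sketch below hand-rolls a perceptron, one of the simplest supervised learners: it evaluates each labeled input, receives feedback on whether it was right or wrong, and adjusts its weights accordingly. The data points, labels and constants are all invented for illustration.

    # Each input is a feature vector; each label is the "ground truth."
    training_data = [
        ([1.0, 0.2], 1),
        ([0.1, 0.9], 0),
        ([0.9, 0.3], 1),
        ([0.2, 1.0], 0),
    ]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    for epoch in range(20):  # training continues until it performs as desired
        for features, label in training_data:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction  # the feedback: right (0) or wrong (+/-1)
            # Adjust the system based on the feedback.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error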

By contrast, unsupervised AI needs no training. It analyzes the data to figure out, on its own, what information is important and what is background noise — similar to how many researchers think the human brain learns. Unsupervised machine learning of that caliber still lives mostly in research labs and science fiction, but there have been surprising recent developments. The most notable is Google Brain’s so-called Cat Paper, in which Google demonstrated that an unsupervised machine learning system could identify a cat in a photograph without ever having been told what a cat was. But as impressive as this is, it’s still an early experiment, not suitable for commercial use. For the foreseeable future, AI in the workplace will be the supervised kind.
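
Simpler unsupervised techniques, such as clustering, are already workaday tools, even though they fall far short of the brain-like learning described above. Below is a minimal sketch using scikit-learn’s k-means implementation; the data points are invented, and the algorithm is never told what the groups mean.

    from sklearn.cluster import KMeans

    unlabeled_points = [
        [0.1, 0.2], [0.2, 0.1], [0.15, 0.25],   # one natural cluster
        [0.9, 0.8], [0.85, 0.95], [0.8, 0.9],   # another
    ]

    model = KMeans(n_clusters=2, n_init=10, random_state=0)
    cluster_ids = model.fit_predict(unlabeled_points)
    print(cluster_ids)  # e.g. [0 0 0 1 1 1] -- groups, but no names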

The most promising of these is the neural network. As the name suggests, neural networks function something like neurons in a brain. Signals enter the network and traverse a series of nodes. Each node applies a simple mathematical function to the signals it receives and passes the result onward. The nature of the signal and the way each node weighs it determine the output of the system.

Neural networks are not new. They date back to the 1940s. But over the past decade, the rise of cloud computing, better algorithms, abundant data and cheap graphics processing units (chips originally designed for gaming that turn out to be perfectly suited to AI) has enabled the development of neural networks much larger, more complex and more effective than any previously created.

The most advanced use a technique called “deep learning.” Instead of one layer of nodes, deep learning networks contain many. Deep learning systems are responsible for tremendous recent leaps in AI performance, in fields like image recognition, language translation and self-driving vehicles.
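
In code, a node can be as simple as a weighted sum squashed into a fixed range, and “deep” simply means stacking layers of such nodes so that each layer’s output feeds the next. The sketch below shows only the structure, not the training; all the weights are made up for illustration.

    import math

    def node(inputs, weights, bias):
        # One "neuron": weigh the incoming signals, then squash to 0..1.
        signal = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-signal))

    def layer(inputs, weight_rows, biases):
        # One layer: many nodes reading the same inputs in parallel.
        return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

    def deep_network(inputs, layers):
        # "Deep" = the output of each layer becomes the next layer's input.
        signal = inputs
        for weight_rows, biases in layers:
            signal = layer(signal, weight_rows, biases)
        return signal

    # Two inputs -> a hidden layer of two nodes -> one output node.
    example_layers = [
        ([[0.5, -0.4], [-0.3, 0.8]], [0.1, 0.0]),
        ([[1.2, -0.7]], [0.05]),
    ]
    print(deep_network([1.0, 0.5], example_layers))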

But like any supervised machine learning, deep learning doesn’t work “out of the box.” Initially, a neural network has no way to distinguish between, say, pictures of cats and dogs. You have to run thousands — ideally millions — of cat and dog photos through the network, each time adjusting the nodes to give better results. Without training, the AI is useless. With a little training, it’s not much better. But once you train it on thousands, hundreds of thousands or, better still, millions of examples, you can achieve startling levels of accuracy.

The catch is that deep learning systems, with their multiple layers of nodes, require even more training than traditional neural networks. Imagine the nodes in a deep learning network as players on a basketball court executing a set play. If you change the angle of the first player’s pass, the second player has to move. That changes the angle of the next pass, and the next, and so on to the end of the chain. Each change to the system creates a ripple of additional changes. The goal is to find the best configuration of players for a particular play. But the number of possible combinations is enormous, and for this reason, tuning a deep learning system takes a huge number of iterations — and a correspondingly huge amount of data. It’s no coincidence the greatest advances in AI are coming from companies like Google, Amazon and Facebook, which have a practically infinite supply of user data at their disposal.

AI and the law

That brings us to one of the two main reasons AI won’t eclipse attorneys. AI thrives on data, and legal cases simply don’t have enough of it. A training set in a legal matter might consist of a thousand documents. That sounds like a lot, but it’s dwarfed by training sets for deep learning systems, which might be thousands of times as large. As a rule, the less training data you have, the less capable your AI can become — and the more it will need human supervision.

The second reason is related, and perhaps even more important. Training an AI requires not just a lot of data, but data from the same general context. Supervised AI excels at tasks whose context doesn’t change from instance to instance. The practice of law is nothing like that. It’s a bewildering collision of contexts: language, business operations, societal structure and mores, law, slang, implication. It’s subjective and ill-defined, logical and emotional. Gray areas abound. It’s the sort of environment where humans excel — and AI shows its limitations.

To take one example, some optimists believe AI may soon be used to draft contracts. The idea seems based on the assumption that writing contracts is a mechanical task. In fact it’s quite subtle. On the one hand, precision is important. If a contract isn’t worded just so, the consequences can be ruinous, up to and including bankruptcy. On the other hand, ambiguity is often desirable. Sometimes you want to leave room for maneuver. The contract writer’s task, then, is to make the wording precise and ambiguous at once. Shade the meaning where needed. Spell it out when necessary. Early attempts at “smart contracts” show how far technology remains from accomplishing this feat. No AI we have now or can expect in the near future is likely to do better.

Does that mean AI has no place in the legal industry? Not at all. One application has already gone from curiosity to commonplace. Predictive coding — or as it is sometimes called, technology-assisted review — is a good example of the kind of AI we can expect to see more of in the legal workplace.

Predictive coding uses machine learning to quickly identify documents relevant to a legal matter. In most instances, a senior attorney trains the system by “coding” a sample set of documents. In others, the technology learns as it goes by monitoring document reviewers. Either way, once trained, the predictive coding system applies the human attorneys’ judgments across the entire collection, spotting relevant documents faster and, in many cases, more accurately than a human would.
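
As a rough illustration of the idea (not any particular vendor’s product), the sketch below trains a simple text classifier on a handful of attorney-coded documents, then scores unreviewed documents by estimated relevance. The documents and labels are invented, and a real system would train on far more examples than this.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    coded_docs = [
        "merger agreement draft attached for review",
        "lunch order for friday team meeting",
        "revised indemnification clause per counsel",
        "office holiday party photos",
    ]
    labels = [1, 0, 1, 0]  # 1 = relevant, as coded by a senior attorney

    vectorizer = TfidfVectorizer()
    classifier = LogisticRegression()
    classifier.fit(vectorizer.fit_transform(coded_docs), labels)

    # Apply the attorneys' judgments across the wider collection.
    unreviewed = ["signature pages for the merger agreement",
                  "parking validation"]
    scores = classifier.predict_proba(vectorizer.transform(unreviewed))[:, 1]
    for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
        print(f"{score:.2f}  {doc}")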

This kind of coarse-grained analysis is fine if you just need to determine a document’s responsiveness. Predictive coding does a superb job making millions of relatively easy decisions. It can dramatically reduce attorneys’ drudge work. But to identify and really understand the most important evidence in a case, humans are still essential. At best, predictive coding can point in the right direction and say, “Look here first” — which has tremendous value, to be sure. However, it can’t make the leap from “probably relevant” to “actually critical to this case.” It takes humans to do that.

Document review is just one area of legal practice, but the same principles apply across the industry. Because training data is limited, and legal contexts are constantly changing, AI can only accelerate attorneys’ work, not replace it.

That isn’t true of every profession. Certain skilled jobs are vulnerable to AI. These tend to involve narrowly defined problems with a rich supply of training data. Radiology is a good example. Given enough X-rays, AI systems have become quite good at diagnosis. Over time they’ll only improve.

But law is neither narrow nor data-rich. In the legal workplace we’re likely to see what might be called “hybrid AI” — systems that are neither manual nor automated, but a partnership between man and machine. Predictive coding, described above, is one such hybrid. Many more are coming.

For example, while AI probably won’t draft contracts, it might do quite well at analyzing them. We can imagine contract analysis systems that use AI to spot potential contradictions or overlooked exposure, or research tools that nudge attorneys toward relevant prior case law. Startups are already working on applications like these, and there’s every reason to think they’ll succeed. In each instance, the AI serves as a quick and tireless assistant. Attorneys take the information the AI assembles, ponder its significance, decide what matters and marshal it to build a case. That’s where we’re headed.

Artificial intelligence will undoubtedly transform the practice of law — and indeed, the world, sooner than we might have thought. Give AI enough information, and it can do amazing things. They just aren’t the same things an attorney can do. Ironically, the advent of AI reveals more clearly than ever what a deep, complicated and fundamentally human endeavor the law really is.
