Automating Attorneys: Hope or Hype?

Editor’s Note: The author of this post is a data scientist at Paul Hastings and the author of a column called “Matters of Opinion.”

By Tom Barnett, Special Counsel, Paul Hastings, LLP

We are bombarded every day with the idea that artificial intelligence (AI) is changing the world right before our eyes. Is it going to catapult us into prosperity, happiness and fulfillment? Or is it a scourge upon the land ready to steal our jobs, numb our minds and act as a gateway drug to totalitarianism? That depends on your worldview. The undeniable truth is that machine learning algorithms are assisting, and in some cases even replacing, human decision-making in everything from medical diagnoses to investment strategy to building design. Is the future a stark binary choice between Nirvana and the apocalypse, or is there a more mundane, but realistic, middle ground?

The practice of law plays an interesting role in this debate. Major changes are taking place in how legal services are delivered, how fees are calculated and how legal work is performed. There are countless opportunities to take advantage of the dramatic recent technological advances. But at the same time, the highly risk-averse legal profession is, almost by nature, one of the slowest adopters of new technology. So how AI will ultimately impact the legal profession remains a very open question.


Hype-ocalypse Now.

What is not an open question is the role of AI as an incredible, or more accurately, not credible, source of hype. The deluge of delusion comes in two distinct flavors: utopian and apocalyptic. Advertisements predictably provide utopian visions of touching, humorous and sensitive interactions between humans and machines, while news articles and interviews cover the full spectrum of hyperbolic hysteria, including end-of-civilization-as-we-know-it scenarios.

On the sunnier side of Delirium Street, a recent article in the prominent technology magazine Wired discussed (with a straight face) the possibility of creating an artificially intelligent replication of a soon-to-be dearly departed relative. Some very large technology companies that have invested vast amounts in AI R&D have been creating ads not far from this poignantly euphoric vision of the future.

On the AI hell-scape end of the spectrum, noted entrepreneur and futurist Elon Musk was quoted in a recent Vanity Fair article by Maureen Dowd warning that AI poses the “biggest existential threat” to humanity, a threat that could “produce something evil by accident,” such as “a fleet of artificial intelligence enhanced robots capable of destroying mankind.”

At their core, both of these views of the AI world are based on the same premise: computers can think and reason like humans. Words like cognitive, intelligence, thinking, and understanding pop up in advertisements all the time. If you’re willing to accept the premise that computers can actually think, it’s not a far leap to imagine AI robots replicating your dearly departed loved ones or taking over the world Hollywood sci-fi style.


Earth to Elon…

A more down-to-earth, if less exciting, view is that while computers are incredibly useful and can be applied to some interesting and important problems, the idea that they can think like us is demonstrably false, grounded not in science but in the creative, or perhaps hyperactive, imaginations of marketing departments and advertising firms. Could we ever create computers that can actually think? Who knows? Almost anything is theoretically possible. There is a name for that kind of interesting and occasionally useful speculation: science fiction. But it is a lot easier to get your attention with sensational science-fiction narratives than with mundane reality, and the people trying to sell you a lot of expensive stuff know that very well.

The brain functions in a fundamentally different way than a computer, as University of Sussex cognitive and computational neuroscientist Dr. Anil Seth tells us. Our consciousness, our perception of the world and of ourselves in it, is based more on internal signals and predictions about the outside world than on the actual external input we receive. Seth believes that the human brain contains its own built-in conception of the outside world and imposes it on our external perception; this, he argues, is what we determine to be reality. In effect, you are the star of your own movie, one based on your brain's formulation of the outside world. That is to say, you are not just reacting to external events and input; your brain is actually creating them in your mind. It turns out that perception really is reality.

On the surface, this formulation of consciousness may seem similar to how a computer functions. But the computer operates at a vastly simpler level. A computer trying to identify something, or to respond to an external stimulus, compares the input to a library of data it has been provided, looks for the stored example that best matches that input and selects an appropriate response. But significantly, it does not start from the vast and continually growing baseline of perceptual experience that we as humans have from the day we are born.
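The matching process described above can be reduced to a deliberately toy sketch. The "library," the feature numbers and the labels here are all invented for illustration; no real recognition system is this simple, but the underlying move is the same: "recognition" is a distance computation over examples a human supplied in advance, not perception.

```python
import math

# A hypothetical "library" the machine has been given in advance:
# each entry pairs a feature vector with a pre-defined response.
LIBRARY = [
    ((0.9, 0.1), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.5, 0.5), "unknown"),
]

def respond(external_input):
    """Return the response whose stored features lie nearest the input.

    There is no understanding here, only arithmetic: the machine picks
    whichever pre-supplied example minimizes the distance to the input.
    """
    def distance(entry):
        features, _label = entry
        return math.dist(features, external_input)

    _features, response = min(LIBRARY, key=distance)
    return response
```

An input close to the first stored example, such as `respond((0.8, 0.2))`, simply returns the label a human attached to that example; an input unlike anything in the library still gets forced onto the nearest stored response, which is precisely the brittleness the paragraph above describes.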

A human brain is believed to contain up to 100 billion neurons, each a complex system in its own right, working together to form your perception of the world. According to Dr. Seth, if you tried to count the number of possible connections that these neurons can make within your brain, counting one per second would take 3 million years. While the work in AI in object recognition, robotic motion, and text analysis is certainly interesting and worthy of attention, it shouldn’t at present be confused with the operations of human biological systems that perform far more complex actions simultaneously and seamlessly in ways that science is just beginning to understand.


Artifice or intelligence?

Let’s look at a common example of AI that generates a lot of hyperbole of both the utopian and apocalyptic flavors: replicating speech. It’s useful because it ties together several concepts about understanding and cognition that are key to what we think of as intelligence. And it’s unique to us as humans.

Infants learn language in every culture we’ve ever observed based on very limited data. Influential linguist and MIT professor emeritus Noam Chomsky describes this as a “poverty of stimulus.” With relatively limited examples, children are eventually able to communicate ideas and thoughts far beyond what they ever actually heard from the people around them. Many scientists now view language as an internal physiological system that is activated at birth rather than just a learned response triggered by external stimulus, more akin to the visual system or our sense of smell. Humans are capable of detecting 1 trillion different scents, according to physiologist Dr. Jennifer Pluznick of Johns Hopkins School of Medicine; none of us has been exposed to anywhere near that number. The idea is that, like our sense of smell, language is part of how we are built. It is more like what makes an acorn turn into a tree or a caterpillar turn into a butterfly.

Computers, by contrast, receive external input and respond with a pre-defined set of responses provided by human beings in the form of a program, which is nothing more than a set of instructions, however complex they may be. Without doubt, programming has become far more sophisticated and appears more flexible, able to modify and enhance responses by analyzing information. While this may appear closer to human learning, on closer reflection it still has a long way to go.

Leading neuroscientists, biologists, linguists and physiologists readily acknowledge our limited understanding of how complex brain systems actually function and interact with each other. Until these brain systems are understood, it is more science fiction than science to speculate that we will be able to replicate these communication and cognitive functions by reducing them to the 1s and 0s of a computer program.


A robot lawyer walks into the bar…

The potential role of AI in practicing law occupies a unique position. A world of robot lawyers replacing the existing human cohort is probably an apocalyptic nightmare to most members of the bar but may amount to utopian bliss for some other members of society. Ironically, given the risk-averse nature of the legal profession, the work being done analyzing legal issues is one of the more realistic and substantive applications of AI in the business world. Tremendous advances are being made in helping lawyers become more efficient at solving practical legal problems, including analyzing complex contract provisions, answering specific questions in massive sets of discovery data, and modeling and predicting outcomes for contemplated litigation.

Putting all the hype aside, the idea of a computer replicating your dearly beloved late relatives or taking over the world is at present about as likely as your mobile phone getting together with your Roomba to steal your identity. Computers can and will continue to assist us in solving some very practical and difficult problems, legal and otherwise, though in far more mundane ways than the breathless, headline-grabbing hyperbole currently in fashion would suggest.

As Albert Einstein pointed out in his 1938 book The Evolution of Physics (co-written with Leopold Infeld), how a scientific problem is formulated (i.e., asking the right question) is “often more essential” than the solution to the problem. Success in scientific endeavors comes from clearly defining, understanding and, typically, limiting the scope of the problem you’re trying to solve. This is demonstrated in the history of the major scientific advancements of the last 500 years, from Galileo to Einstein and beyond.

If the AI problem is conceived as creating a computer that is intelligent and thinks the way we do, it is not a realistic one for the foreseeable future. A more reasonable goal is to continue advancing AI technology to solve specific, real problems by combining the advantages that computers have with the special abilities that we as human beings possess.