Billable Hour ‘Makes No Sense’ in an AI World


Panelists see a future in which a lawyer’s failure to use AI will be an ethical lapse and a basis for malpractice liability

Lawyers will be increasingly valued for judgment, creativity


Artificial intelligence (AI) is transforming the practice of law, and “data is the new oil” of the legal industry, panelist Dennis Garcia said at a recent American Bar Association conference. Garcia is an assistant general counsel for Microsoft in Chicago. Robert Ambrogi, a Massachusetts lawyer and blogger who focuses on media, technology, and employment law, moderated the program.

“The next generation of lawyers is going to have to understand how AI works” as part of the duty of competence, panelist Anthony E. Davis told the audience. Davis is a partner with Hinshaw & Culbertson LLP in New York.

Davis showed the audience a slide quoting Andrew Ng, a computer scientist and professor at Stanford University: “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.” AI can “automate expertise,” Davis said. Because software marketed by information and technology companies increasingly makes it unnecessary to ask a lawyer for information about statutes, regulations, and requirements, “clients are not going to pay for time,” he said. Instead, he predicted, they will pay for a lawyer’s “judgment, empathy, creativity, adaptability, and emotional intelligence.”

Davis said AI will result in dramatic changes in law firms’ hiring and billing, among other things. The hourly billing model, he said, “makes no sense in a universe where what clients want is judgment.” Law firms should begin to concern themselves not with the degrees or law schools attended by candidates for employment but with whether they are “capable of developing judgment, have good emotional intelligence, and have a technology background so they can be useful” for long enough to make hiring them worthwhile, he said.

“The limitation on lawyers partnering with nonlawyers is basically a recipe for doom for the law firm,” Davis added.

Performance Enhancement

The panelists provided examples of how the use of artificial intelligence can enhance lawyers’ efficiency:

  • Legal research. That artificial intelligence has made legal research easier, cheaper, and more efficient isn’t news, Ambrogi said. But AI in legal research now empowers lawyers to upload briefs or other documents for analysis to obtain much more specific search results. “Some [services] can give you not just cases but the answer to your question,” he said. Davis added that one AI company “is giving away its software to every federal judge.” That software, he said, will analyze briefs and notify judges and law clerks of “what cases were missed and should have been cited.”
  • Document review in e-discovery. Although lawyers may think they’re the “gold standard” for reviewing documents, technology can actually do a better job, Ambrogi said.
  • Drafting and evaluating contracts. Davis said a major U.S. bank developed software to evaluate thousands of contracts “to see if they were identical and if they got it right.” The program took two days to provide results, and the bank “estimated they saved 360,000 hours of lawyers’ time.”
  • Evaluating lateral hires. Panelist Kyle Doviken, a lawyer who works for Lex Machina in Austin, Texas, said many firms are hiring more lawyers with six or seven years’ experience than first-year associates fresh out of law school. One large international law firm, he said, uses his company’s data to evaluate every candidate considered for lateral hire. “Did a candidate just enter an appearance in a case or actually try it?” He also noted that in-house counsel are using big data to screen litigation counsel who are competing for their company’s business.
  • Assessing propensities of federal judges. “We have data for every federal district court judge or magistrate” regarding, for example, their rates of granting motions for summary judgment, Doviken said. Lawyers can review the data and decide “in seconds” whether a motion might be fruitful or a needless expenditure of client resources, he said. Every client in every matter, Doviken said, “asks how much is it going to cost, how long is it going to take, and are we going to win or lose.” Using big data, lawyers can answer each of those questions with a “very high degree of likelihood,” he said. In fact, one judge Doviken has known for years “said he wished everyone had this data, because then so many motions would never get filed.”

Judicial Bias?

A partner at a large firm, Doviken said, had a “hunch” that a certain judge’s rulings favored alumni of the judge’s law school. After reviewing three years’ worth of data, the firm concluded the hunch was valid, assigned a graduate of that law school to a matter pending before that judge, and started winning its motions. Although Doviken hastened to say that one anecdote proved nothing, he wondered whether the firm had gained any new ethical duties as a result of that data review. Should the firm, he asked, confront the judge or file a report with a judicial conduct agency? Does the firm now owe a duty to any client with a case pending before that judge to staff the matter with at least one alumnus of that law school?

An audience member asked whether a lawyer might be required to use legal analytics to satisfy the ethical duty of competence. [See Model Rule 1.1.] Davis responded that The T.J. Hooper v. Northern Barge Corp., S.D.N.Y., 10/15/31, is “directly relevant to lawyers and law firms” with respect to AI. In that case, a shipping company was found liable for a boat accident that might have been avoided had a reliable radio—then new technology—been on board. Going forward, lawyers who don’t use AI software—or use the wrong software—and get a bad result for their clients will be found liable in the resulting legal malpractice cases, he posited. “Future lawyers are going to have a really deep understanding of [AI] products and how they work” so their firms can get the results their clients want, he said.

The panel, entitled “Ethics Issues in Lawyers’ Use of Artificial Intelligence,” took place May 31 at the 44th ABA National Conference on Professional Responsibility in Louisville, Ky.

To contact the reporter on this story: Helen Gunnarsson at helen.gunnarsson@americanbar.org

To contact the editor responsible for this story: S. Ethan Bowers at sbowers@bloomberglaw.com