Big Law firms are gearing up for potential discrimination lawsuits stemming from inherent bias in artificial intelligence tools used by employers. Since March, both Paul Hastings and DLA Piper have launched practices focused on artificial intelligence, while other firms like Littler Mendelson, Fisher Phillips, and Proskauer Rose have partners well-versed in the developing technology and its risks.
A lack of federal policy means that lawyers are proactively informing employers of the risks of AI tools, considering workers have already filed charges with the Equal Employment Opportunity Commission.
Companies have turned to AI tools to headhunt potential applicants, screen resumes for desirable qualities, or flag when a worker might be thinking of flying the coop, lawyers told Bloomberg Law. Most attorneys are advising clients to vet new technology carefully before deploying it, and to avoid fully automated tests that could disparately affect employees.
Yet, very little has been done about workplace AI challenges on a federal level.
In recent months, some members of Congress have begun paying closer attention to the potential risks of artificial intelligence in the workplace, but key lawmakers have been resistant to committing to legislation regulating the field.
With both Congress and the EEOC dragging their feet, that leaves the responsibility, at least for now, with lawyers representing clients who want to use the technology.
How It Works, and Doesn’t
Artificial intelligence takes many forms in shaping the employment process, appearing everywhere from resume screening to tests that measure cognitive ability, Paul Hastings partner Willner said.
One example of inherent bias is resume screening, where Willner said he commonly sees missteps.
“Often times, employers will try to model their AI resume screening tools in order to replicate their previous human-driven resume screening,” he said.
Willner described one client whose tool searched for candidates by filtering for the words “black” and “Africa,” because those words had yielded ideal candidates in the past.
“Those are words one would not want to use in resume screening,” he said.
While that’s a blatant example, others are more nuanced, like analogy testing, a common cognitive measure used in the interview process.
“You can get different answers, different associations, from people with different backgrounds,” resulting in an impact on certain protected classes of people, he said.
Some employers are also willing to take on risk to reap the benefits the tools can provide a business, Paul Hastings partner Sullivan said.
“There is always a risk of adverse impact,” she said. “They can only do what they can do to minimize it.”
AI-based discrimination claims haven’t made their way to the courts yet beyond the Facebook lawsuits, but workers have filed charges with the EEOC, Willner said.
The charges pending at the EEOC involve selection tests that use AI to choose the questions a candidate must answer and then to score the answers, a process that could be inherently biased, he said.
Clients are expressing interest in learning more about the technology, to safeguard themselves from liability, Fisher Phillips partner Randy Coffey said.
“If you don’t know the basis of the algorithms generating decisions, you are making yourself at risk if it generates” an unfair impact, he said.
Sullivan advises clients to look past the often-lofty promises made by sellers of emerging AI tools. The goal of widening candidate pools is noble, but not all tools are made alike.
“I would say we’re watching everything,” she said. “The technologies are ever-changing.”
Back in 2016, the EEOC held an educational meeting about AI, though no policies evolved from it.
“Trying to create a brand new, comprehensive answer, before you know what the question is, is always a challenge,” one Littler Mendelson shareholder said.