What if technology powered by artificial intelligence (AI) could help visually impaired people see? Such technology already exists in the form of a smartphone app called Seeing AI, which serves as a talking camera for visually impaired people, describing their surroundings at any given moment. It is a tool that can improve the quality of life for millions of people.
The Seeing AI app is an incredible example of the power of AI to transform our society. As AI becomes a greater part of our everyday lives, our legal profession has an important role to play in shaping the key legal and ethical principles associated with AI:
Privacy & Security
Massive amounts of data are required to fuel AI and to train the algorithms that power AI solutions. As data privacy laws across the globe continue to evolve (e.g., the European Union's General Data Protection Regulation, which becomes effective on May 25, 2018), as significant data loss and unauthorized-access incidents continue to strike well-known institutions, and as cybercriminals become ever more sophisticated, it is of paramount importance that AI systems respect privacy and be highly secure.
Inclusiveness
AI solutions ought to benefit everyone, not just a select few. So that AI systems can empower all of us, they need to embrace inclusive design practices that identify potential barriers and gaps that could unintentionally exclude groups of people. For instance, building AI technologies with an inclusive mindset can enhance accessibility for the more than 1 billion people across the globe with disabilities.
Fairness
AI systems should treat all people in a fair and equitable manner. For example, when AI technologies help recommend medical treatment for patients, everyone with similar symptoms should receive the same guidance from their doctors. However, since AI systems are designed by human beings who have inherent biases, they are susceptible to operating unfairly. While more work needs to be done to help ensure that AI systems promote fairness in their outcomes, it is critical that the individuals designing AI systems reflect the broad diversity of the world we live in.
Reliability & Safety
We have all seen science-fiction movies that portray "robots" as evil and harmful. In reality, companies will only adopt technology they can truly trust, so AI systems must operate in a safe and reliable manner. Safety and reliability can be promoted by embracing AI design and delivery best practices such as the following: properly evaluating the quality of the data used to train AI offerings; implementing appropriate verification processes; identifying when human input is warranted and when control of an AI system should be transferred to a human; and establishing suitable feedback mechanisms.
Transparency
True transparency is a fundamental principle for establishing trust. AI solutions must be clear and understandable, especially when they influence decision-making and impact people's lives. While simply identifying the various algorithms within an AI system is probably not sufficient on its own, the technology industry will need to work together to establish an approach that describes the key pieces of an AI system in a straightforward, easy-to-understand fashion.
Accountability
Very recently, a self-driving car tragically struck and killed a woman in Arizona. As AI solutions are increasingly embraced across all industry sectors, who is responsible when liability arises, and how is that liability appropriately allocated when multiple parties are involved? The accountability issue becomes even more acute as AI is deployed in higher-risk applications that carry a greater likelihood of bodily injury or death.
Some fear that "robots" may replace lawyers, while others (including me) believe that AI is a tool that will augment the quality, speed and delivery of legal solutions to our clients. As AI solutions become more widely adopted in the legal profession, we will need local bar organizations to partner with the growing legal technology community to provide thoughtful and practical guidance on the legal ethics of lawyers using AI systems. The key principles described above — privacy and security, inclusiveness, fairness, reliability, transparency and accountability — provide a blueprint for the legal community to help build a framework for lawyers to use AI technology in an ethical and responsible manner.
As we are still in the early stages in the growth of AI solutions, our legal profession has a golden opportunity to positively impact the future of AI for our society. Let’s be sure to seize that opportunity.
Dennis Garcia is an Assistant General Counsel for Microsoft Corp. based in Chicago. He practices at the intersection of law, business and technology and provides a wide range of legal support to Microsoft’s Sales, Marketing and Operations teams across the U.S. Dennis received his B.A. in Political Science from Binghamton University and his J.D. from Columbia Law School. He is admitted to practice in New York, Connecticut, and Illinois (House Counsel).