Bloomberg Law
May 15, 2018, 7:31 PM UTC

What Congress’s First Steps Into AI Legislation Portend

Christopher Fonzone
Sidley Austin LLP
Kate Heinzelman
Sidley Austin LLP

Artificial intelligence is taking off. Investment in AI is skyrocketing; tech’s most powerful companies are betting it’s the next big thing; and nearly every day seems to bring a new report suggesting AI will have an enormous impact on the world’s economy over the next generation and beyond. We may not yet know how AI will affect the future, but we can be certain that it will. While AI is already subject to existing legal and regulatory regimes, the question advocates and skeptics alike are asking is whether and how legislators should regulate AI specifically. Advocates have urged Capitol Hill to facilitate AI investment and help U.S. companies maintain technological leadership in the field by passing laws that provide targeted liability protection or preempt conflicting state laws. At the same time, skeptics including Tesla Inc. CEO Elon Musk have expressed grave concerns about AI’s potential for disruption, asking regulators to step in and impose guardrails “before it’s too late.” While Congress has largely hung back, several AI-related bills have emerged that may signal what’s to come.

Although it’s too early to provide a definitive answer about how Congress will react, the past several months have offered the first real clues as to where lawmakers might be headed. During that period, legislators introduced five AI-related bills: the House-passed SELF DRIVE Act (H.R. 3388), which addresses the safety of automated vehicles; the AV START Act (S. 1885), introduced by a bipartisan group of senators to tackle driverless cars as well; the FUTURE of AI Act (H.R. 4625), introduced in both chambers by bipartisan groups of lawmakers to create an advisory committee on AI issues; the AI JOBS Act (H.R. 4829), which calls for a Department of Labor report on the impact of AI on the workforce; and the National Security Commission Artificial Intelligence Act of 2018 (H.R. 5356), which would establish a commission to review advances in AI with an eye toward promoting U.S. national security. These bills have yet to become law, but taken together they provide important insight into how Congress views AI and its role, if any (at this stage), in regulating it. Moreover, early legislative pronouncements with respect to developing technologies, once enacted, tend to be “sticky” and to have consequences that not even the legislation’s advocates may realize or fully understand. That’s why it’s crucial for those with a stake in AI’s future to pay attention and consider what steps they should take in light of the potential for legislative action.

What Does Congress Have to Think About?

Congress faces a number of questions as it contemplates AI. Beyond the threshold question of whether it should regulate at all, it must also decide whom it empowers and how. Should Congress use existing regulators or a new regulatory or advisory body? Should it focus on cross-cutting issues unique to AI technologies as a whole, or instead adopt a sector-by-sector approach (much like the approach it has taken to U.S. privacy law)? Or should Congress pursue a public-private partnership model designed to encourage self-regulation, an approach it favored in the cybersecurity arena when it passed legislation such as the Cybersecurity Information Sharing Act of 2015 (S. 754)? The regulatory pathway Congress chooses could reveal much about how it views AI, including whether it sees AI as an independently meaningful category or as a new technology that should be addressed on a sector-specific basis.

Similarly, if Congress chooses to step in, it must also decide which aspects of AI warrant its attention. AI shares many legal and regulatory challenges with other technologies and with the use of big data, but it also poses unique concerns. Certain themes thus repeatedly emerge as a short checklist of issues that might pique lawmakers’ interest, although it is impossible to predict with certainty all of the questions that will arise as the technology develops:

* Threshold questions:

  * Defining AI: What technologies should qualify as AI?

  * Outer limits: Are there any decisions that AI systems should not be allowed to make? For example, should there be a prohibition on autonomous weapons systems that can employ force without human intervention? See, e.g., Autonomous Weapons: An Open Letter From AI & Robotics Researchers, Future of Life Institute (July 28, 2015) (arguing, in a letter signed by Stephen Hawking, Elon Musk, and Steve Wozniak, among others, that “[s]tarting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control”).

  * The rights and responsibilities of robots: Will various standards that apply to human actions apply to robots? See, e.g., Jack Balkin, The Path of Robotics Law, 6 Cal. L. Rev. Circuit 45, 46 (2015). For example, how does mens rea in the criminal law apply to AI? How about foreseeability in torts?

* Safety: How can we be sure that AI systems are safe? Should our testing regimes for autonomous systems differ materially from testing regimes for non-autonomous systems? What kinds of liability regimes (e.g., tort-based or joint and several liability) should apply to injuries caused by these systems?

* Security: Given AI systems’ autonomous capabilities, should they be subject to specific cybersecurity rules? If so, should these rules focus on disclosure of potential risks and ensuring processes are in place for self-monitoring, or should regulators play a more prescriptive role by imposing minimum standards?

* Privacy: How, if at all, should the use of data in machine learning systems affect people’s ability to control their personal information? Should AI systems be subject to more stringent requirements than other technologies to disclose what personal information they use and/or retain?

* Discrimination: Even if AI systems are not designed to discriminate improperly or perpetuate bias, they may end up doing so in practice. What should be done to guard against this? See, e.g., Alex Campolo et al., The AI Now 2017 Report, at 13-21.

* Due process:

  * Minimum standards: Should there be minimum standards for how AI systems make decisions? For how they must be tested before they are released and used? For how the results they generate must be monitored?

  * Transparency: Should AI systems be subject to greater transparency requirements than traditional technologies? Must humans be told when decisions that affect them are being made by robots? Should we require AI systems to explain why they did what they did or on what information their actions are based?

* Economic implications:

  * Competition: Will Congress take specific steps to ensure that the AI field is competitive?

  * Workforce: There is general agreement that AI could have a dramatic impact on various sectors of the workforce. What steps, if any, should Congress take to address these structural employment changes, such as creating training programs?

  * Intellectual property: Are specific IP rules needed for AI systems, both to protect the IP in those systems and to protect the works they in turn create?

What Has Congress Been Up To?

It is against the backdrop of these debates that members of Congress have recently considered several pieces of AI-related legislation:

The SELF DRIVE Act. H.R. 3388 was introduced in the House in July 2017 and passed by voice vote in early September 2017. The Senate has referred the bill to the Committee on Commerce, Science, and Transportation but has taken no further action.

The SELF DRIVE Act’s stated purpose is to “memorialize the Federal role in ensuring the safety of highly automated vehicles as it relates to design, construction, and performance, by encouraging the testing and deployment of such vehicles.” Consistent with this goal, the act, among other things, explicitly preempts state laws “regarding the design, construction, or performance of highly automated vehicles, automated driving systems, or components of automated driving systems.” The act also requires the secretary of transportation to take a number of steps with respect to automated vehicle safety, including completing a rulemaking requiring manufacturers of automated vehicles to submit information on how they are addressing safety, and further directs the secretary to establish a “Highly Automated Vehicle Advisory Council” within the National Highway Traffic Safety Administration. The act also places obligations on manufacturers, requiring them to develop cybersecurity and privacy plans before selling certain vehicles that contain automated technology.

The AV START Act. Senators John Thune (R-S.D.) and Gary Peters (D-Mich.) introduced the act, which was reported by the Committee on Commerce, Science, and Transportation in November 2017. Its purpose is to “encourage a gradual introduction of [highly automated vehicle (HAV)] technology in a way that would promote public safety and build public confidence and trust in the technology, while at the same time, avoiding unreasonable restrictions on the introduction of the technology into interstate commerce.” S. Rep. No. 115-187 (Nov. 28, 2017).

To that end, the act is similar to the SELF DRIVE Act in certain respects. Among other things, it preempts state and local law “regulating the design, construction or performance” of HAVs or automated driving systems with respect to specified safety topics while federal standards are formulated. Like the SELF DRIVE Act, the AV START Act also directs the Department of Transportation to take a number of safety-related steps, such as establishing a committee to make technical recommendations regarding the safety of autonomous vehicles, with those recommendations potentially forming the basis for rulemaking. The AV START Act also specifically calls on the DOT to revise existing standards to account for differences between automated and human systems.

Much like the SELF DRIVE Act, the AV START Act provides that the secretary may exempt a specified but increasing number of vehicles from certain safety standards each year. Obligations also fall on manufacturers, who are required to submit safety reports to the DOT and (again mirroring the SELF DRIVE Act) to develop, maintain, and execute written plans for identifying and reducing cybersecurity risks to their systems. The act also specifically directs the DOT to establish a committee composed of representatives from federal, state, and local governments, as well as industry stakeholders, to make recommendations to Congress within two years regarding ownership and control of the information that HAVs generate, collect, and store, with departments and agencies prohibited from issuing rules on these issues during the report’s pendency.

The FUTURE of AI Act. The act was introduced in both the House and Senate in December 2017. Neither chamber has taken action on the bills beyond referring them to committee, but both versions have a bipartisan set of co-sponsors.

Unlike the two bills discussed previously, the FUTURE of AI Act does not focus on a particular application of AI technology. Rather, the bill is motivated by the view (as a sense-of-Congress provision at the outset states) that “understanding and preparing for the ongoing development of artificial intelligence is critical to the economic prosperity and social stability of the United States.” Consistent with this view, the bill directs the secretary of commerce to establish a federal advisory committee made up of a broad cross-section of AI stakeholders from business, the academic and research community, civil society, and labor. The act directs the committee to study and report to the secretary and Congress on a range of AI-related topics, and specifically identifies as the committee’s priorities the development of guidance or recommendations on how to: “promote a climate of investment and innovation to ensure the global competitiveness of the United States”; “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce that results from the development of artificial intelligence”; “promote and support the unbiased development and application of artificial intelligence”; and “protect the privacy rights of individuals.”

The AI JOBS Act of 2018. The act, introduced by bipartisan members of Congress in January 2018, expresses the sense of Congress that technology can have positive impacts but may also disrupt the workforce. It calls for the secretary of labor to prepare a report on that impact.

The National Security Commission Artificial Intelligence Act of 2018. Like the AI JOBS Act, this act, introduced in the House in March 2018, calls for further study: it would establish a commission to review AI advances to “comprehensively address the national security needs of the Nation, including economic risk, and any other needs of the Department of Defense or the common defense of the Nation.” The commission’s mandate is broader than the bill’s title may immediately suggest. It would review, among other things, ways for the U.S. to “maintain a technological advantage” in AI, approaches “to foster greater emphasis and investments in basic and advanced research” in these areas generally, and “means to establish data standards and provide incentives for the sharing of open training data within related data-driven industries.”

What These Bills Tell Us

The bills outlined above provide initial insights into how Congress is thinking about AI and attempting to strike an appropriate balance between, on the one hand, a desire to encourage AI’s economic development and, on the other, the technology’s potential implications for U.S. “social stability” and the U.S. workforce. Three observations stand out.

First, although all five bills recognize the disruptions AI could potentially produce, they largely take a let’s-study-it-further approach that demonstrates a sensitivity to the rapidly changing nature of the technology and to the risk that premature regulation could stifle innovation. All five bills call for study and reporting on AI-related issues, with the prospect of federal regulation deferred until after initial studies are complete. Thus, the sort of prescriptive regulation favored by AI skeptics and seen recently in the European Union’s General Data Protection Regulation appears to be largely off the table.

Second, consistent with this take-it-slow approach, Congress appears to be focusing less on making broad pronouncements on AI generally and more on addressing sector-specific questions as they arise. To be sure, the FUTURE of AI Act would establish a generalist commission tasked with understanding and preparing for AI developments, and the AI JOBS Act would study the workforce generally. But the more detailed, prescriptive bills focus exclusively on the sector where the technology has significantly advanced and where the states are more involved: automated vehicle technology. These bills are not only further along in the legislative process, but they also feature the statutory provisions with the most immediate impact: the preemption of certain state laws to ensure that the path is clear for innovation without the complications caused by disparate state regulatory regimes.

Third, although the bills reference many AI-related issues, which makes it difficult to discern at this point which issues will prove most salient, Congress appears to be particularly interested in cybersecurity and data privacy. Both the SELF DRIVE and AV START acts require companies to put cybersecurity plans in place and contain provisions addressing consumer privacy. The FUTURE of AI Act identifies the protection of individual privacy rights as a priority AI issue. These bills appear to anticipate that the importance of managing and protecting information will continue to grow as increasingly large data sets are used to power AI.

What Does the Future Hold?

It’s difficult to make predictions, and all the more so when there is little to go on, as with Congress’s views on AI. But the introduction of these bills, along with legislative efforts at the state level, shows, if nothing else, that legislators are seized of the many issues presented by AI’s rapid development.

To be sure, these bills provide no definitive proof of future legislative activity. Consider how many times it has appeared as though Congress was about to enact a nationwide breach notification law. Nonetheless, the time for companies with a stake in the future of AI to get in on the debate is now. Think about how legislation passed much earlier in the computer age continues to have unforeseen impacts today. Consider the Electronic Communications Privacy Act of 1986, with its warrant requirement only for emails fewer than 181 days old, and the liability protections of Section 230 of the Communications Decency Act, enacted in 1996.

These legislative choices, and many more, demonstrate that, even if companies don’t think legislation or regulation is necessary or wise, they should be prepared to contribute to the ongoing debate (and perhaps to the committees that may be formed by the bills discussed here) to advocate for their preferred way forward. Among other things, engagement with legislators and regulators provides an opportunity for companies to showcase their efforts to self-regulate. As we have already seen, industry best practices, model disclosure and cybersecurity policies, and industry-specific model codes of conduct can play an influential role as regulators look for successful models on which to build. By the same token, the AI community can play an important role in educating legislators on AI itself, helping them better understand, meaningfully describe, and distinguish between technologies that should be regulated differently. If the private sector fails to engage, it risks seeing regulatory frameworks that may endure and shape future legislation developed without its input.

Christopher Fonzone is a partner in the privacy and cybersecurity group at Sidley Austin. He served as deputy assistant to President Barack Obama, deputy White House counsel, and legal adviser to the National Security Council.

Kate Heinzelman is a member of the privacy and cybersecurity group at Sidley Austin. She served as special assistant to President Obama and associate White House counsel, and clerked for Chief Justice John Roberts.

This article has been prepared for informational purposes only and does not constitute legal advice. This information is not intended to create, and the receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers. The content herein does not reflect the views of the firm.
