What's Driving AI Law in America?
Toby Ord’s recent book, The Precipice, makes a compelling case that humanity needs to focus collectively on reducing existential risks to ensure our long-term survival as a species. He identifies anthropogenic risks, specifically those driven by technology, as the most pressing to tackle over the next century. Ord believes the rise of uncontrolled AI is this century’s most significant risk to the future of humanity. As fantastical as this possibility may seem to many, invoking pop-culture ideas such as the Terminator or HAL 9000, he bases this estimate on a wide range of interviews with leading AI researchers, many of whom take the possibility very seriously.
Whether or not you believe that AI poses a serious threat (this blogpost builds a persuasive case for taking it seriously), its increasing use by businesses and governments has driven growing public awareness of the need for regulation to avoid unintended consequences.
Federal legislation has thus far been limited to a failed bill, the Algorithmic Accountability Act, introduced in 2019 by Senator Ron Wyden, which would have directed the FTC to require companies to assess their AI systems for “impacts on accuracy, fairness, bias, discrimination, privacy, and security.”
The lack of comprehensive legislation at the federal level has meant that AI restrictions have come primarily in the form of regulatory guidance from executive branch agencies, such as a November 2020 memo from the Office of Management and Budget (OMB). This final guidance to agencies is in line with previous American regulatory approaches to controversial technology: avoiding unnecessary regulation in order to promote growth and, in this case, maintain “continued status as a global leader in AI development.” To this end, it advises against an overly cautious approach that would “harm innovation,” and it required all agencies to submit plans demonstrating compliance with the standards by May 2021.
At the state level, according to the National Conference of State Legislatures, 17 states introduced legislation governing artificial intelligence and four enacted it. Of the bills passed across these four states, two were intended to prevent discrimination (by insurers and by employers during hiring), one established an advisory body on AI, and the last incorporated AI into educational curricula.
According to the federal OMB guidance, regulatory bodies must consider their actions’ impacts on state laws in order to “address inconsistent, burdensome, and duplicative State laws that prevent the emergence of a national market.” The desire to maintain national dominance seems to be driving American AI policy: keeping federal regulations from becoming cumbersome to innovation while preventing state laws from fragmenting the national market.
While a permissive regulatory approach remains dominant, some agencies have acted to impose limitations. In April this year, the FTC released updated guidelines (similar to guidelines from a year prior) to ensure AI is not used in ways that discriminate against legally protected classes, particularly racial minorities. The guidelines emphasize that existing legal protections, such as Section 5 of the FTC Act, the Fair Credit Reporting Act, and the Equal Credit Opportunity Act, can and will be used to ensure companies employ AI in their business practices in a nondiscriminatory manner.
However, these guidelines ultimately amount to a warning to companies rather than a new set of stricter standards or procedures to ensure AI systems don’t have unintended consequences. Because they do not require impact assessments, such as those proposed in the Algorithmic Accountability Act referenced earlier, it falls to companies to have the patience and integrity to design systems that avoid producing biased outcomes.
The future U.S. approach to AI seems destined to become more hands-on as AI usage becomes widespread across many sectors of the economy. However, fears of being beaten out by China in the race to develop AI for both military and commercial purposes are likely to continue to limit the extent to which AI legislation will constrain any development, even as researchers emphasize the danger of engaging in an AI arms race.