Helen Toner worries ‘not super functional’ Congress will flub AI policy
Helen Toner, a former OpenAI board member and the director of strategy at Georgetown's Center for Security and Emerging Technology, is worried Congress might react in a "knee-jerk" way when it comes to AI policymaking, should the status quo not change.
"Congress right now, I don't know if anyone's noticed, is not super functional, not super good at passing laws, unless there's a massive crisis," Toner said at TechCrunch's StrictlyVC event in Washington, D.C. on Tuesday. "AI is going to be a big, powerful technology; something will go wrong at some point. And if the only laws that we're getting are being made in a knee-jerk way, in reaction to a big crisis, is that going to be productive?"
Toner's comments, which come ahead of a White House-sponsored summit Thursday on the ways in which AI is being used to support American innovation, highlight the longstanding gridlock in U.S. AI policy.
In 2023, President Joe Biden signed an executive order that implemented certain consumer protections regarding AI and required that developers of AI systems share safety test results with relevant government agencies. Earlier that same year, the National Institute of Standards and Technology, which establishes federal technology standards, published a roadmap for identifying and mitigating the emerging risks of AI.
But Congress has yet to pass legislation on AI, or even propose any law as comprehensive as regulations like the EU's recently enacted AI Act. And with 2024 a major election year, it's unlikely that will change any time soon.
As a report from the Brookings Institution notes, the vacuum in federal rulemaking has led to a rush by state and local governments to fill the gap. In 2023, state legislators introduced over 440% more AI-related bills than in 2022; close to 400 new state-level AI laws have been proposed in recent months, according to the lobbying group TechNet.
Lawmakers in California last month advanced roughly 30 new bills on AI aimed at protecting consumers and jobs. Colorado recently approved a measure that requires AI companies to use "reasonable care" while developing the tech to avoid discrimination. And in March, Tennessee governor Bill Lee signed into law the ELVIS Act, which prohibits AI cloning of musicians' voices or likenesses without their explicit consent.
The patchwork of rules threatens to foster uncertainty for industry and consumers alike.
Consider this example: in many state laws regulating AI, "automated decision making," a term broadly referring to AI algorithms making some sort of decision, like whether a business receives a loan, is defined differently. Some laws don't consider decisions "automated" so long as they're made with some level of human involvement. Others are stricter.
Toner thinks that even a high-level federal mandate would be preferable to the current state of affairs.
"Some of the smarter and more thoughtful actors that I've seen in this space are trying to say, OK, what are the pretty light-touch, pretty common-sense guardrails we can put in place now to make future crises, future big problems, likely less severe, and basically make it less likely that you end up with the need for some kind of rapid and poorly-thought-through response later," she said.