# Helen Toner worries ‘not super functional’ Congress will flub AI policy


Helen Toner, a former OpenAI board member and the director of strategy at Georgetown’s Center for Security and Emerging Technology, is worried that Congress might react in a “knee-jerk” way when it comes to AI policymaking, should the status quo not change.

“Congress right now – I don’t know if anyone’s noticed – is not super functional, not super good at passing laws, unless there’s a massive crisis,” Toner said at TechCrunch’s StrictlyVC event in Washington, D.C., on Tuesday. “AI is going to be a big, powerful technology – something will go wrong at some point. And if the only laws that we’re getting are being made in a knee-jerk way, in reaction to a big crisis, is that going to be productive?”

Toner’s comments, which come ahead of a White House-sponsored summit on Thursday about how AI is being used to support American innovation, highlight the longstanding gridlock in U.S. AI policy.

In 2023, President Joe Biden signed an executive order that implemented certain consumer protections regarding AI and required that developers of AI systems share safety test results with relevant government agencies. Earlier that same year, the National Institute of Standards and Technology, which establishes federal technology standards, published a roadmap for identifying and mitigating the emerging risks of AI.

But Congress has yet to pass legislation on AI – or even to propose any law as comprehensive as the EU’s recently enacted AI Act. And with 2024 a major election year, that is unlikely to change any time soon.

As a report from the Brookings Institution notes, the vacuum in federal rulemaking has prompted state and local governments to rush to fill the gap. In 2023, state legislators introduced over 440% more AI-related bills than in 2022; close to 400 new state-level AI laws have been proposed in recent months, according to the lobbying group TechNet.

Lawmakers in California last month advanced roughly 30 new AI bills aimed at protecting consumers and jobs. Colorado recently approved a measure requiring AI companies to use “reasonable care” while developing the technology in order to avoid discrimination. And in March, Tennessee Governor Bill Lee signed into law the ELVIS Act, which prohibits AI cloning of musicians’ voices or likenesses without their explicit consent.

The patchwork of rules threatens to create uncertainty for industry and consumers alike.

Consider one example: many state laws regulating AI define “automated decision making” – a term broadly referring to AI algorithms making some sort of decision, such as whether a business receives a loan – differently. Some laws don’t consider a decision “automated” so long as it’s made with some level of human involvement. Others are stricter.
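To make the divergence concrete, here is a minimal, purely illustrative Python sketch. The two definitions, the `human_involvement` field, and the thresholds are hypothetical stand-ins, not drawn from any actual statute; the point is only that one and the same decision can count as “automated” under one definition and exempt under another.

```python
from dataclasses import dataclass

@dataclass
class LoanDecision:
    approved: bool
    human_involvement: float  # fraction of the decision attributable to a human reviewer, 0.0 to 1.0

def is_automated_strict(decision: LoanDecision) -> bool:
    # Stricter style of definition (hypothetical): any algorithmic role at all
    # makes the decision "automated", even with substantial human review.
    return decision.human_involvement < 1.0

def is_automated_lenient(decision: LoanDecision) -> bool:
    # Looser style of definition (hypothetical): meaningful human involvement
    # exempts the decision from the "automated" label.
    return decision.human_involvement == 0.0

# The same loan decision, classified differently under the two definitions.
decision = LoanDecision(approved=False, human_involvement=0.5)
print(is_automated_strict(decision))   # True  -> covered as automated decision making
print(is_automated_lenient(decision))  # False -> exempt
```

A company operating in several states would have to run checks like both of these at once, which is exactly the compliance burden the patchwork creates.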

Toner thinks that even a high-level federal mandate would be preferable to the current state of affairs.

“Some of the smarter and more thoughtful actors that I’ve seen in this space are trying to say, OK, what are the pretty light-touch – pretty common-sense – guardrails we can put in place now to make future crises – future big problems – likely less severe, and basically make it less likely that you end up with the need for some kind of rapid and poorly-thought-through response later,” she said.


