Bipartisan bill aims to manage the risks of advanced artificial intelligence
On Monday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the Artificial Intelligence Risk Evaluation Act, a bill designed to manage the risks of advanced artificial intelligence through federal oversight. The bill requires developers of advanced AI to submit their systems for testing for weaponization and loss-of-control risks before deployment, imposes penalties for non-compliance, and tasks the Department of Energy (DOE) with generating empirical data to guide federal oversight, including potential regulatory frameworks for AI and artificial superintelligence.
“Congress has spent a lot of time over the last year debating whether to do away with regulations for the AI industry – this bill is a welcome show of bipartisan support for creating rules of the road to protect the public. If the innovators in Silicon Valley are right about what they’re building, and AI has the capacity to supersede human intelligence, we need Congress to get serious about federal oversight that safeguards our national security, our families, and our workforce,” said ARI President Brad Carson. “Sens. Hawley and Blumenthal’s new bill moves the debate forward with a serious attempt at creating transparency, accountability, and guardrails for AI developers at the highest level.”
Under the AI Risk Evaluation Act, developers of advanced AI would be required to submit information to the Department of Energy and could not deploy new models until they comply with its requirements. The legislation would generate empirical data to inform future regulation and would require the Secretary of Energy to report annually to Congress with recommendations for addressing AI risks.
###
Americans for Responsible Innovation (ARI) is a nonprofit organization dedicated to policy advocacy in the public interest, focused on emerging technologies like artificial intelligence (AI). Learn more at ARI.us.