WASHINGTON, D.C., June 25, 2024 — Over the course of the spring, members of ARI’s Policy and Government Affairs teams participated in an AI Legislation Policy Sprint organized by the Federation of American Scientists (FAS). FAS issued a call for diverse, creative, innovative, and feasible AI policy proposals in the form of policy memos, and received submissions from experts in academia, think tanks, industry, and civil society. ARI’s memo was one of fifteen selected for publication.
ARI’s memo, Message Incoming: Establish An AI Incident Reporting System, written by David Robusto, Satya Thallam, Doug Calidas, and John Croxton, proposes a national system through which developers, deployers, and other stakeholders can voluntarily report AI incidents that have caused harm, or AI hazards that have not yet caused harm but are likely to do so.
See a preview of our memo below. Read the memo in its entirety here.
What if an artificial intelligence (AI) lab found that its model had a novel dangerous capability? Or a susceptibility to manipulation? Or a security vulnerability? Would it tell the world, confidentially notify the government, or quietly patch the problem before release? What if a whistleblower wanted to come forward – where would they go?
Congress has the opportunity to proactively establish a voluntary national AI Incident Reporting Hub (AIIRH) to identify and share information about AI system failures, accidents, security breaches, and other potentially hazardous incidents with the federal government. This reporting system would be managed by a designated federal agency—likely the National Institute of Standards and Technology (NIST). It would be modeled after successful incident reporting and information-sharing systems operated by the National Cybersecurity FFRDC (funded by the Cybersecurity and Infrastructure Security Agency (CISA)), the Federal Aviation Administration (FAA), and the Food and Drug Administration (FDA). The system would encourage reporting by allowing for confidentiality and by guaranteeing that only government agencies could access sensitive AI system specifications.
AIIRH would provide a standardized and systematic way for companies, researchers, civil society, and the public to give the federal government key information on AI incidents, enabling analysis and response. Because of its statutory mandate, it would also provide the public with reliable access to some of these data, albeit often at a lower level of granularity than the government would have. Nongovernmental and international organizations, including the Responsible AI Collaborative (RAIC) and the Organisation for Economic Co-operation and Development (OECD), already maintain incident reporting systems, cataloging incidents such as facial recognition systems identifying the wrong person for arrest and trading algorithms causing market dislocations. However, those two systems have limitations in scope and reliability that make them better suited to public accountability than to government use.
By establishing this system, Congress can enable better identification of critical AI risk areas before widespread harm occurs. The proposal would both build public trust and, if implemented successfully, help relevant agencies recognize emerging patterns and take preemptive action through standards, guidance, notifications, or rulemaking.
###
About Americans for Responsible Innovation (ARI)
Americans for Responsible Innovation (ARI) is a nonprofit organization dedicated to policy advocacy in the public interest, focused on emerging technologies like artificial intelligence (AI). Learn more at www.ari.us.