Our Priorities

To advance the cause of responsible innovation, ARI is advocating for thoughtful, targeted policies that address the AI issue from multiple angles.

Cross-Cutting Policy Recommendations

In addition to policies that address specific harms and risks, ARI recommends the following measures that cut across issue areas and improve the U.S. government’s ability to lead on AI.

RECOMMENDATIONS:
  • Strengthen and codify the Commerce Department’s Center for AI Standards and Innovation (CAISI).
  • Oppose pre-emption of state-level AI regulations until and unless strong federal guardrails provide adequate safeguards.
  • Establish “Right to Warn” whistleblower protections for employees developing frontier AI systems.
  • Democratize access to AI for academic researchers by generously funding the National AI Research Resource (NAIRR) and federal testbeds.
  • Retain, attract, and develop technical talent within government to strengthen capacity for navigating the rapidly developing AI policy landscape.

Current Harms

Government is falling behind in addressing the current harms caused by AI systems, including algorithmic bias, electoral interference, and labor market disruption.

RECOMMENDATIONS:
  • Promote public trust by requiring disclosure when automated systems (e.g., chatbots) impersonate humans.
  • Support existing federal agencies in their oversight of AI deployments in regulated use cases under their jurisdiction.
  • Promote high-quality and timely data to monitor and plan for AI’s impact on labor markets.
  • Allow content creators to quickly and easily opt out of having their work product included in datasets used to train AI models.
  • Prevent nonconsensual deepfakes and strengthen individual control over the use of personal data, image, and likeness.
  • Safeguard the integrity of federal elections by prohibiting distribution of deceptive AI-generated audio or visual content to influence an election or solicit funds.

National Security

We must preserve America’s lead in the global race for AI innovation while improving our defenses against AI-powered cyber, robotic, chemical, and biological weapons deployed by state and non-state actors.

RECOMMENDATIONS:
  • Strengthen the U.S. export control regime on advanced semiconductors and model weights to prevent China from developing and disseminating advanced AI capabilities.
  • Apply “Know Your Customer” regulations to U.S.-based cloud providers to prevent countries of concern and malicious actors from accessing cloud resources to bypass export controls and train powerful AI models.
  • Create reporting requirements for large data centers to allow tracking of large compute clusters.
  • Increase funding for the Commerce Department’s Bureau of Industry and Security to tighten enforcement of export controls on cutting-edge semiconductors.
  • Harden security at frontier AI labs to prevent adversaries from stealing our most valuable IP.

American Innovation

We want the U.S. to continue to lead the world in AI innovation, both at the frontier of technological progress and in diffusing AI throughout the economy.

RECOMMENDATIONS:
  • Reform our permitting system to make it easier to build the data centers and energy generation needed to power the AI revolution.
  • Increase funding for the Commerce Department’s National Institute of Standards and Technology (NIST) for work to help develop regulatory capacity on AI.
  • Fund research for developing and applying AI to the highest-value opportunities in medical research, healthcare, and materials science.
  • Invest in long-term data collection and publication that will power training runs for the next generation of AI models to solve scientific and public policy challenges.

Emerging Risks

As AI capabilities increase over the coming years, misuse and misalignment of powerful AI systems could present dangerous risks that we must try to mitigate.

RECOMMENDATIONS:
  • Create a voluntary incident reporting database to help regulators and experts track emerging risks and dangerous, unexpected behavior from AI systems.
  • Impose reporting requirements on the largest AI models to facilitate federal monitoring of capabilities, communicate suggested policy actions to relevant bodies throughout the government, and reduce risks.
  • Establish clear liability for AI agents.
  • Require advanced generative AI systems to be evaluated for dangerous capabilities, such as facilitating development of chemical, biological, radiological, and nuclear weapons.
  • Require screening of DNA synthesis requests to prevent proliferation of dangerous biological material.
  • Fund research in interpretability, robustness, safety, and security of AI systems.