Microbiologists raise concerns about widening gap between AI capabilities and security
New research published in Science has identified loopholes in the biosecurity screening meant to catch synthetic proteins that could be used to create biological weapons and dangerous synthetic organisms. The study's authors, including leading scientists at Microsoft, found that by tweaking the structure of known harmful proteins, a bad actor could evade detection by software designed to screen for dangerous toxins and pathogens. While the security loophole has since been patched, microbiologists are raising concerns about the widening gap between AI's protein design capabilities and current security practices.
“We need to start taking the threat of AI-designed bioweapons seriously,” said ARI President Brad Carson. “This month’s paper from leading biosecurity experts exposed simple tweaks that bad actors could make to consistently get away with generating harmful synthetic proteins. It’s like if someone found out they could take a hand grenade, paint it a different color, then walk it through TSA with no problem. The government has an important role in updating security around access to high-risk biomaterials in the AI era. We shouldn’t be waiting to find out who discovers the next loophole.”
In May, the Trump Administration issued an executive order addressing nucleic acid synthesis screening. Last month, ARI hosted an event with Dean Ball, one of the primary authors of the Trump Administration's AI Action Plan, to discuss new federal policies shaping the future of AI and biosecurity.
###
Americans for Responsible Innovation (ARI) is a nonprofit organization dedicated to policy advocacy in the public interest, focused on emerging technologies like artificial intelligence (AI). Learn more at ARI.us.