Group calls for enforceable standards to ensure human control over lethal force
A diverse coalition of faith leaders is calling on Congress today to act swiftly to establish clear, enforceable guardrails on the use of AI in military operations, warning that the rapid integration of artificial intelligence into warfare is outpacing the legal and moral frameworks needed to govern it.
In a letter sent to leaders of the Senate and House Armed Services Committees, the cross-faith group underscores the urgency of ensuring human oversight and meaningful safeguards as autonomous AI weapons are deployed in military systems. Signed by 20 faith leaders, the letter warns that existing Pentagon guidance, centered on vague standards like “appropriate levels of human judgment,” is insufficient to govern decisions of such consequence.
As global tensions rise and religious leaders warn of the escalating human costs of war, the letter calls for clear rules to guarantee human oversight of autonomous weapon systems. “Every great faith tradition teaches us that the taking of a life is a grave and solemn act,” the faith leaders say. “As a matter of conscience, we believe that such a decision must never be delegated to a machine.”
Public opinion strongly supports this approach. A national poll released by Americans for Responsible Innovation last month found that 75% of Americans agree that decisions about using lethal force in war should involve meaningful human judgment rather than being left entirely to AI systems. Meanwhile, 76% say they are concerned that AI tools could enable unprecedented government surveillance of American citizens.
###
Americans for Responsible Innovation (ARI) is a nonprofit organization dedicated to policy advocacy in the public interest, focused on emerging technologies like artificial intelligence (AI). Learn more at ARI.us.