US Govt Panel: Don’t Ban AI-Powered Autonomous Weapons
The panel urged Washington to “adopt AI to change the way we defend America, deter adversaries, use intelligence to make sense of the world, and fight and win wars.”
A government-appointed commission has argued that the US should not ban the use or development of artificial intelligence-powered autonomous weapons.
In a draft report for Congress, the National Security Commission on Artificial Intelligence, led by former Google CEO Eric Schmidt, argued that to defend America, artificial intelligence must be used to gather intelligence, counter other AI-enabled technologies, and help militaries “prepare, sense and understand, decide, and execute faster and more efficiently.”
During two days of public discussion, the panel’s vice chairman, former deputy secretary of defense Robert Work, said that using autonomous weapons in battle could reduce confrontations and casualties caused by target misidentification, because such systems make fewer errors than humans.
“It is a moral imperative to at least pursue this hypothesis,” he said, arguing that “autonomous weapons will not be indiscriminate unless we design them that way.”
In the report, the panel urged Washington to “adopt AI to change the way we defend America, deter adversaries, use intelligence to make sense of the world, and fight and win wars.”
“The AI promise — that a machine can perceive, decide, and act more quickly, in a more complex environment, with more accuracy than a human — represents a competitive advantage in any field. It will be employed for military ends, by governments and non-state groups,” it further said.
Human Control over AI
The prospect of diminished human accountability and a potential loss of control over AI-enabled autonomous weapons has been a source of concern for some. Critics argue that such weapons, or “killer robots,” circumvent human morality and decision-making and may therefore violate international humanitarian law and considerations of proportionality.
Earlier this week, the head of US Army Futures Command, Gen. John Murray, suggested that when defending against drone swarms, AI-enabled combat systems may need greater autonomy so they can identify and destroy enemy assets quickly enough.
Murray’s statement marks a departure from the Pentagon’s rules on the use of autonomous weapons. Until now, the Defense Department has consistently emphasized the importance of human control over the firing of deadly weapons.