US should not ban AI weapons - panel says autonomous systems can save lives
The USA has a 'moral imperative' to explore autonomous weapons systems, says the National Security Commission on Artificial Intelligence
The National Security Commission on Artificial Intelligence (NSCAI) has advised the US government not to ban the development of AI-powered autonomous weapons, saying that they can - counterintuitively - save lives.
According to Reuters, the Commission said in its draft report to Congress that the US has a 'moral imperative' to explore such weapons, which could eventually lead to fewer casualties in war.
The panel is led by former Google CEO Eric Schmidt. Robert Work, a former deputy secretary of defence, is vice-chair.
Work argues that AI-powered weapons should make fewer errors than human soldiers, resulting in fewer clashes caused by target misidentification. He said the USA has "a moral imperative to at least pursue this hypothesis."
During a public discussion on whether the US needs AI weapons for national security, panel members also addressed the human rights concerns associated with the use of autonomous weapons.
A coalition of NGOs has spent the past eight years campaigning for an international treaty that would completely ban the use of autonomous weapons systems by armed forces worldwide. The 'Campaign to Stop Killer Robots' argues that human control is necessary to judge the proportionality of attacks and to assign blame for war crimes.
The group claims that 30 countries, including Pakistan and Brazil, are ready to sign the treaty.
Commenting on the US panel's report, Mary Wareham, coordinator of the campaign group, said the NSCAI's "focus on the need to compete with similar investments made by China and Russia" will only encourage an arms race across the world.
News of the draft report comes at a time when several countries are pursuing AI weaponry. China is recruiting teenagers straight out of school to work on such systems, and US Space Command (Spacecom) is reportedly increasing its use of AI and machine learning to stay ahead of its adversaries.
"We must innovate to achieve and maintain our competitive advantage," Army Gen. James Dickinson, commander of Spacecom, said on Tuesday during a virtual event hosted by the Mitchell Institute for Aerospace Studies.
"We must evolve cyber operations in order to maintain an agile and resilient posture. And we must invest in game-changing technologies to include artificial intelligence and machine learning," Dickinson added, according to National Defense Magazine.
Not everyone agrees, however.
In 2018, more than 2,000 scientists working in the AI field signed a pledge not to develop or manufacture "lethal autonomous weapons". Tesla founder Elon Musk and three co-founders of Google's London-based DeepMind AI subsidiary were among the signatories.
In 2019, Microsoft president Brad Smith said autonomous weapons put civilians' safety at risk - and, a year later, ex-Google engineer Laura Nolan warned that incorporating AI into military weapons could have dire consequences, including accidentally starting the next world war.
Nolan resigned from Google in 2018 after being assigned to a US military drone project. Google eventually let its contract for the project lapse in March 2019, after more than 3,000 Googlers signed a petition condemning the company's involvement in military work.