Ban military from weaponising artificial intelligence, urge Hawking, Wozniak, Musk and over a thousand tech experts

If militaries develop AI weapons, it will only be a matter of time before they appear in the hands of terrorists, dictators and warlords, say experts

The weaponisation of artificial intelligence should be banned, according to an open letter signed by physicist Stephen Hawking, Apple co-founder Steve Wozniak, and more than 1,000 other technology experts, researchers and scientists.

Other signatories of the letter include business magnate Elon Musk, Google DeepMind CEO Demis Hassabis, and cognitive scientist Noam Chomsky.

The rise of artificial intelligence over the past few years has been striking, and many believe that AI will put some jobs at risk. The likes of Google, Microsoft and Facebook have all invested in artificial intelligence, and new start-ups touting AI as the basis for their products seem to appear every week. One artificial intelligence expert has said he would be astonished if the US National Security Agency (NSA) were not using artificial intelligence to scan communications.

But AI and robotics researchers believe that using artificial intelligence for weapons, such as armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, is a step too far. The letter describes autonomous weapons as the third revolution in warfare, after gunpowder and nuclear arms.

The researchers believe that if any major military power pushes ahead with AI weapon development, a global arms race is "virtually inevitable". Unlike nuclear weapons, which require costly and hard-to-obtain raw materials, autonomous weapons will be cheap and easy for all significant military powers to mass-produce.

"It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc," the letter reads.

"Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity," the researchers said.

But the signatories emphasised that AI could be used to make battlefields safer for humans - particularly civilians.

The signatories liken AI researchers to the chemists and biologists who supported international agreements that successfully prohibited chemical and biological weapons, and to the physicists who backed treaties banning space-based nuclear weapons and blinding laser weapons.

"Most AI researchers have no interest in building AI weapons - and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits," they said.

"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control," they concluded.