Bad data and death: Ian Hill speaks about AI in warfare at the Cybersecurity Festival
21st century warfare will extend to land, sea, air and cyberspace
War drives technology development.
World War 1 brought tanks, artillery and radio; World War 2 saw the dawn of advanced encryption, radar and the atomic bomb. Even the internet has its roots in the US Department of Defense's research arm, the Advanced Research Projects Agency (ARPA).
The internet is here to stay - and that brings its own risks. "If it were to disappear overnight in the developed world, everything would grind to a halt. It would be anarchy and utter chaos," said Upp CISO Ian Hill, speaking at the Cybersecurity Festival this week.
Hill, who has made a career of studying the intersection of war and cyber, pointed out that although technology has brought many benefits to the modern world, humankind has a propensity to turn anything developed for good into a weapon.
The internet, as any security chief knows, is no different. In just the last five years, cyberattacks have affected critical infrastructure like oil pipelines, water supply and healthcare, and the impact is being felt in the physical space.
"Cyber weaponry, as well as being intended to attack specific targets, can cause vast amounts of collateral or unintended damage," said Hill, pointing to the 2017 NotPetya attack. Originally designed to attack Ukraine, NotPetya spread around the world to affect companies like Maersk and Cadbury.
Attacks on commercial companies can cost hundreds of millions, but attacks on the public sector can be even more dangerous. As Hill described it, "This is when it starts to get serious."
A very recent example is the 2021 attack on Ireland's Health Service Executive (HSE), which massively delayed treatments while the system was already under pressure from the COVID-19 pandemic.
"This is a health service. There are people there trying to help and cure people... There were no reported deaths, but there were suggestions deaths could have occurred indirectly because of delays."
Other cyberattacks with real-world consequences include a tram derailment in the Polish city of Lodz in 2008, and the Stuxnet attack on Iran's uranium enrichment centrifuges, uncovered in 2010.
Where are AI's limits?
These attacks are concerning on their own, but when artificial intelligence is involved, the potential scale ramps up quickly.
The scale is not even the most worrying part. Hill disputes the description of modern systems as artificially intelligent.
"They're not intelligent at all - it's all machine learning. If what they've learned is not correct, how can they be expected to make a difference? If AI is taught to kill, would it know the limits and when to stop?"
Considering this, perhaps the biggest danger of AI - or ML - comes down not to what it has been instructed to do, but to its training data.
Evidence is easy to come by. Chatbots trained on the open internet, like Microsoft's Tay and Yandex's Alice, have quickly devolved into racism and advocating violence. Amazon's attempt at algorithmic recruitment showed evidence of sexism because of its biased training data.
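The mechanism behind such failures is simple to illustrate. The following is a minimal, hypothetical sketch - not any real recruitment system - of how a model trained on historically biased decisions reproduces that bias: two equally qualified candidates get different outcomes purely because of a spurious group attribute the training data associated with past rejections.

```python
from collections import Counter

def train(examples):
    """'Learn' by counting how often each feature value co-occurs with
    each label - a stand-in for any ML model, which can only reflect
    the patterns present in its training data."""
    counts = {}
    for features, label in examples:
        for f in features:
            counts.setdefault(f, Counter())[label] += 1
    return counts

def predict(model, features):
    """Predict the label most strongly associated with the input's features."""
    tally = Counter()
    for f in features:
        tally += model.get(f, Counter())
    return tally.most_common(1)[0][0] if tally else None

# Hypothetical hiring history in which a spurious attribute
# ("group_a" vs "group_b") happens to correlate with past decisions.
biased_history = [
    (["experienced", "group_a"], "hire"),
    (["experienced", "group_a"], "hire"),
    (["experienced", "group_b"], "reject"),
    (["inexperienced", "group_b"], "reject"),
]

model = train(biased_history)

# Equally experienced candidates, different outcomes: the model has
# absorbed the group attribute as a predictor.
print(predict(model, ["experienced", "group_a"]))  # hire
print(predict(model, ["experienced", "group_b"]))  # reject
```

The model was never "instructed" to discriminate; it simply generalised from what it was shown - which is precisely the failure mode Hill warns becomes catastrophic when the output is not a hiring decision but a targeting one.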
Imagine those same flaws applied to an AI in control of a fleet of armed drones, or with missile launch codes.
"If you teach an AI that it's okay to kill someone for various reasons, it might expand and decide that it's okay to kill everybody."
Unfortunately, Hill concluded, the genie is out of the bottle. Despite warnings and pledges against autonomous weapon systems, the technology exists, and we have to live with the consequences.
The Cybersecurity Festival was a great success and will return next year. Computing's next hosted event, the IT Leaders Summit, will take place this October at Down Hall in Essex.