Google Cloud launches 2025 cybersecurity forecast
AI will play a more prominent role, but perhaps not the one you think
Google Cloud yesterday released its cybersecurity forecast for 2025. The report draws on insights from Google Cloud’s own security leaders as well as numerous researchers, analysts and responders from across the Google Cloud security ecosystem.
At a launch event, Stuart McKenzie, Managing Director of Mandiant Consulting EMEA at Google Cloud, set out the forecast for the use of AI early on. In summary, both attackers and defenders will use AI more, but some of the perceptions about the risks AI poses in the hands of attackers are a little overblown.
“We are still very early on in attacker use of AI,” said McKenzie. “AI doesn’t give an amateur the ability of a skilled attacker. I think we will see that evolution begin, though.”
“AI will also begin to play a more prominent role in influence operations. It feels like a natural step to amplify messaging. We’ve finished the first wave, where we’ve begun to play with AI, and I think we’ll see the second wave. AI will do a lot of the toil (on both sides) but there will still be humans in the loop.”
Dr Jamie Collier, Senior Threat Intelligence Advisor at Google Cloud, set out some of the analysis of the risks posed by the “Big 4” – Russia, North Korea, China and Iran.
“These are very mature espionage actors,” he said. “What we’ve seen is a real shift in Russian operations towards the Ukrainian frontline. Before, they were going after civilian targets and infrastructure. That’s really quietened down. Now Russia has changed focus to GPS systems and frontline infrastructure. There’s been a huge focus on Ukrainian frontline mobile devices, and we’ve seen Russia compromise those in many ways.”
“We’ve also seen watering hole attacks on iOS devices elsewhere in the world.”
North Korea is also changing its game, using tech workers to infiltrate European organisations. It sounds far-fetched, but plenty of organisations have spotted this sort of activity. The goal isn’t always espionage either; much of the malign activity seems to be motivated by the need to fund the North Korean regime.
What is resilience?
The launch of the report generated a fascinating discussion on the nature of resilience.
Stuart McKenzie said: “When we talk to organisations about resilience, we find a lot of them think we mean resilience against an attack now. What we mean when we’re talking to CNI organisations [Critical National Infrastructure] is resilience against the attacks of the future.
“In a lot of the early attacks against Ukrainian CNI, the implants were there 6 or 7 years beforehand. This is not something that happens quickly. You aim to have implants in place years in advance. For resilience in CNI you have to think about how you are protecting it for 20 or 30 years. When we talk to energy and water providers, their tech is there for the next 50 years. It’s not meant to change.
“There is more to resilience than the short-term focus of ‘if we are attacked, how can we rebuild?’”
According to Google Cloud and the analysts at the launch, the way these states and their criminal communities use cyber operations has changed. North Korea, for example, isn’t simply trying to compromise organisations; the goal is longer-term influence and power.
This has shown up in some quite dramatic shifts in supply chain compromise. Why compromise the next SolarWinds and make a whole load of noise when you can go after open-source libraries and developer tools, such as GitHub repositories maintained by just a few people?
What of the scenario that we all like to frighten ourselves with: that AI will turn into a sort of arms race between those who would use it constructively and those who have more malign motivations?
Collier reiterated the point that, for now, AI will probably just help unsophisticated threat actors become moderately more sophisticated. It’s not changing the nature of their craft – yet. He also said:
“There’s been some lazy framing of the AI discussion with the arms race analogy. On one hand you have actors using AI for social engineering, and we’re already empowered to stop that. We don’t need fancy AI tools to stop those threats.”
“On the flipside, if we talk about AI’s role in a security team, we shouldn’t measure the success of that by its ability to detect AI threats. We should measure it by the extent to which it helps address common pain points in security teams, like alert fatigue and general toil, rather than by its ability to defeat a hypothetical threat.”