AI-enhanced security tools: living up to their promise?
A look at three top AI/ML-based security tools
The case for AI in cybersecurity is a compelling one. Organisations are becoming technologically more complex and more distributed all the time, with many having hundreds, thousands or tens of thousands of endpoints to look after. Services are a mix of cloud and on premises, with partners frequently accessing corporate networks via APIs.
At the same time there's pressure to do more with less. Delivery times are shorter, uptime expectations are higher and, to make things even harder, there's a shortage of skilled security professionals.
Plus, of course, adversaries themselves are using AI to sneak around the barriers put in place to stop them. Sophisticated attackers may be using Monte Carlo techniques to optimise their offensive strategy, which often unfolds in multiple stages before the ultimate target is reached.
Use cases
AI-equipped tools offer the potential to analyse millions of events in real time, identifying threats such as malware, phishing attempts or zero-day exploits, and flagging risky or anomalous behaviour such as unusual data downloads.
AI/ML can also help with threat exposure assessment, looking for patterns in attackers' activities with reference to global threat databases and, leading on from that, predicting the risk of a data breach.
Incident response, where AI-powered systems provide improved context for triaging security alerts, is another common use case.
AI can also be very helpful for augmenting network policies on the fly. Policies such as access control lists are effective, but they can be very cumbersome to enforce at scale, and in a complex environment attributing vulnerabilities to specific applications can be very difficult. AI, however, can learn patterns of traffic and then suggest new policies to implement.
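To make the idea concrete, here is a minimal, hypothetical sketch in Python: observe which flows recur during a baseline window and suggest allow-list entries only for well-established patterns. The flow tuples, threshold and rule format are illustrative assumptions, not any particular vendor's implementation.

```python
# Hypothetical sketch: suggest allow-list rules from observed traffic patterns.
# The flows, threshold and rule syntax are illustrative only.
from collections import Counter

# Flows observed during a baseline window, e.g. exported from NetFlow or
# firewall logs, as (source_subnet, destination_host, port) tuples.
baseline_flows = [
    ("10.0.1.0/24", "app-db.internal", 5432),
    ("10.0.1.0/24", "app-db.internal", 5432),
    ("10.0.2.0/24", "api-gw.internal", 443),
    ("10.0.2.0/24", "api-gw.internal", 443),
    ("10.0.9.0/24", "app-db.internal", 5432),   # seen only once
]

MIN_OBSERVATIONS = 2  # only suggest rules for patterns seen repeatedly

counts = Counter(baseline_flows)
suggested_rules = [
    f"allow {src} -> {dst}:{port}"
    for (src, dst, port), n in counts.items()
    if n >= MIN_OBSERVATIONS
]

for rule in suggested_rules:
    print(rule)  # a human still reviews before anything is enforced
```

A real system would, of course, learn over rolling windows and weigh far more context before proposing a change; the point is that the suggestion, not the enforcement, is what gets automated.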
Most importantly, given the speed at which malware can rip through an organisation, AI-based systems can help reduce the threat response time.
AI is also useful in the data centre for monitoring the facility's vital signs, such as energy use, temperature and bandwidth, which can be invaluable for optimisation. Google reported a 15 per cent reduction in power use after implementing such a system.
A spectrum of intelligence
So the case for AI almost writes itself: at scale, automation and real-time defences are essential. But where does simple automation stop and AI begin?
This can be hard to fathom, and there is often no clear line of demarcation.
Sometimes, undoubtedly, 'AI' starts in the vendor's marketing department. Rebadging, or AI-washing, is always something to watch out for, and there is much scepticism about all things AI - justifiably so in many cases, but certainly not all.
There is also a muddying of the waters that separate 'analytics', largely an after-the-fact process that examines large data sets, and 'proper AI', which is all about iteration: learning and responding in real time using acquired and derived information.
Machine intelligence lies on a spectrum with degrees of autonomy falling into three broad categories.
We can start with assisted intelligence, where the main objective is to improve processes that are already in place. Repetitive, high-volume manual processes are handled automatically so that people are freed up for more valuable activities. Examples might be simple event-sorting jobs or email spam filtering: jobs that can be described on a flowchart. Assisted intelligence can save a lot of time and effort, but maybe it's not all that clever.
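To show just how flowchart-simple such jobs can be, here is a toy rule-based spam scorer in Python. The keyword weights and threshold are invented for the example; a production filter would be considerably more sophisticated.

```python
# Toy example of an 'assisted intelligence' style job: scoring email subjects
# against a fixed keyword list. Weights and threshold are illustrative only.
SPAM_TERMS = {"winner": 2, "free": 1, "urgent": 1, "password": 2}
THRESHOLD = 3

def spam_score(subject: str) -> int:
    """Sum the weights of known spam terms appearing in the subject line."""
    text = subject.lower()
    return sum(weight for term, weight in SPAM_TERMS.items() if term in text)

def is_spam(subject: str) -> bool:
    """Flag the message when its score reaches the threshold."""
    return spam_score(subject) >= THRESHOLD

print(is_spam("URGENT: you are a winner"))   # True
print(is_spam("Quarterly security review"))  # False
```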
Rather more sophisticated is augmented intelligence, which enables people and organisations to do new things that they could not do before. Instead of crunching massive sets of raw data, these systems tend to ingest data streams, such as those flowing from threat intelligence services. Results are nuanced and contextual and may include behavioural analytics.
Then we have autonomous intelligence, where systems can act on their own learned analysis. AI can provide an overview of the whole system so that a defensive plan is generated on the fly while an attack is still in its early stages, allowing the defender to take action before major damage is done - or delegating that action to the machines themselves.
Distinctions are rather fuzzy and solutions may span all three, but most of the developments seen so far in AI security fall into the augmented intelligence category, in areas such as user and entity behaviour analytics (UEBA). These systems can feed back learnings to improve vulnerability databases in real time, and are very helpful in anomaly detection.
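The anomaly-detection idea behind UEBA-style tools can be sketched in a few lines. The example below assumes scikit-learn is available and uses invented per-session features (login hour, megabytes downloaded, failed logins); it trains an Isolation Forest on a baseline of 'normal' sessions and flags those that deviate. Real products draw on far richer feature sets and continuous retraining.

```python
# Minimal anomaly-detection sketch in the spirit of UEBA: features, baseline
# data and parameters are all illustrative assumptions.
from sklearn.ensemble import IsolationForest

# Baseline of 'normal' sessions: [hour_of_day, mb_downloaded, failed_logins]
baseline = [
    [9, 120, 0], [10, 80, 1], [11, 200, 0], [14, 150, 0],
    [15, 90, 0], [16, 110, 1], [9, 130, 0], [13, 95, 0],
]

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# New activity: a routine session, and a 3am session pulling 5GB with
# repeated failed logins.
new_sessions = [[10, 100, 0], [3, 5000, 6]]
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "anomalous" if label == -1 else "normal"
    print(session, "->", status)
```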
Increasingly, though, we're starting to hear more about autonomous systems too. A key use for such systems is IT asset inventory - gaining a complete, accurate and up-to-date map of all devices, users and applications across the organisation and monitoring and assessing any changes.
Not so clever
But as always, there are hurdles to overcome.
First, there can be a problem with false positives in the absence of a large amount of data for training - or a large budget for consultancy.
AI/ML security systems can also be heavy on bandwidth, CPU and memory, and there's a potential for them to be manipulated by attackers feeding them bogus data: dependency on machine intelligence can lead to a false sense of security.
[Survey chart: Which of these are the most challenging areas when implementing AI-enhanced security solutions?]
Explainability is another issue with AI in general: it can be very hard to know why a system has arrived at a particular decision and what exactly tipped it in one direction or another. This matters both for security professionals themselves and for auditors, the board and other decision makers. Advances are being made in this field, however.
AI-enhanced security solutions can also be expensive, and since they tend to be an add-on rather than a replacement for existing security systems the cost may be hard to justify to the board.
Among 150 IT leaders asked about this topic by Computing Delta in June, integration and configuration was top of the grumbles heap, which is interesting because some vendors claim that there should be very little of that required of a self-learning system.
The AI-enhanced security tools market
[Survey chart: Which of these vendors' AI-enhanced security solutions have you trialled or taken into production?]
Microsoft was the vendor most of our respondents had tried, particularly the AI-enhanced SIEM service Azure Sentinel. Next up was Darktrace, perhaps the best-known specialist vendor in this space, then the more generalist security vendors Sophos and Symantec, with CrowdStrike in fifth.
Let's take a quick look at the first three.
Microsoft
Microsoft Azure Sentinel is part of the Azure cloud. It's a SIEM system which, by virtue of having access to an awful lot of learning data in Azure, is able to generate machine learning rules to detect and report anomalies across all the data sources it is configured to monitor. And, of course, it's integrated with the rest of the Microsoft Azure stack.
AI is used primarily to assist with threat detection, and users' own threat intelligence sources and bespoke machine learning models can be integrated with Sentinel in what Microsoft calls BYO-ML - bring your own machine learning.
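To illustrate what 'bringing your own model' might look like in practice, the hypothetical sketch below trains a small scikit-learn classifier on labelled sign-in features and serialises it for use in a notebook or pipeline. The features, labels and file name are assumptions for the example; this is not Sentinel's own API, just the kind of artefact a team might plug in.

```python
# Hypothetical 'bring your own model' artefact: a small classifier trained on
# labelled sign-in features. Features, labels and file name are illustrative.
from sklearn.linear_model import LogisticRegression
import joblib

# Features per sign-in: [distinct_countries_24h, failed_attempts, is_new_device]
X = [[1, 0, 0], [1, 1, 0], [2, 0, 1], [4, 6, 1], [3, 8, 1], [1, 0, 1]]
y = [0, 0, 0, 1, 1, 0]  # 1 = previously confirmed as risky

model = LogisticRegression().fit(X, y)
joblib.dump(model, "risky_signin_model.joblib")  # artefact to wire into a pipeline

# Score a fresh sign-in: probability that it is risky
print(model.predict_proba([[5, 7, 1]])[0][1])
```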
Azure Sentinel achieved top marks for product roadmap, and the degree of autonomy possible. Elsewhere its ratings were good, without excelling in any particular area, which is what we've come to expect from Microsoft in the enterprise. It does most jobs perfectly well, but those with more niche requirements might want to look elsewhere.
For example, Azure Sentinel is cloud-only and one respondent looking for better on-prem coverage went with Vectra instead; for another Unix coverage was lacking.
Darktrace
Number two in the take-up rankings, and perhaps the best-known name in AI-enhanced security, was Darktrace.
Darktrace was described variously as 'groundbreaking', 'unique' and 'cutting edge', and as providing 'the most credible offering in this space'.
Darktrace was one of the first vendors to offer a machine learning 'immune system' approach, whereby the system learns what is normal and how much deviation it can tolerate before acting or alerting its human minders. The company even claims to have shut down a ransomware attack mid-flow.
As such it sits very much at the autonomous intelligence end of the spectrum described earlier.
According to our respondents, in the last 18 months Darktrace has really started to live up to its billing. It was very highly rated for innovation and product roadmap, and its scores were mostly better than last year, when it was widely felt to be more of a slick sales presentation than an effective solution.
But it's not for everyone, and the main reason for that is price. "Too expensive and resource intensive. Hard to sell to board," said one respondent.
Sophos
Sophos is perhaps not the first name that springs to mind in the AI space, but the vendor has been making a name for itself in this area of late, with a specialist team dedicated to building new models in areas such as email phishing protection, anti-impersonation and detecting zero-day attacks.
Its main AI-enhanced product is Intercept X, which uses deep learning techniques for malware detection, anti-ransomware, active adversary protection, and endpoint detection and response.
As an incumbent security supplier to a fair few of our respondents, Sophos was their first port of call when looking to add intelligence to their defences, and by and large they were satisfied.
The company scored solidly all round, and highly for its choice of licensing models and its integration with current and planned environments. Its level of coverage also drew praise.
Sophos was seen as a more traditional cybersecurity company that is pivoting successfully to the new age. Communications, support and roadmap all drew positive comments.
For those mentioning a downside, once again it tended to be price.
Undoubtedly, products based on machine learning are costly to develop, and it's a fact of life that vendors prioritise lucrative enterprise customers, which in this case is also where the need is greatest. Still, it's to be hoped that as the market matures the benefits will filter down to smaller organisations at a lower cost before too long.
We'll be looking in more depth at AI-enhanced security solutions over the coming weeks.