Machine learning in attack detection - what it can and can't do

MWR InfoSecurity's Luke Jennings examines how machine learning should - and shouldn't - be applied to security

The technology industry has long had an issue with hype, forever focusing on "the next big thing" and how it will turn the world on its head. Vendors tend to latch on to the latest hot topic and begin making extravagant promises as they vie for a spot in what they hope is the next big growth market.

Analyst group Gartner listed machine learning at the very top of its aptly named "peak of inflated expectations" section in its 2016 'Hype Cycle for Emerging Technologies'. The next part of the cycle is traditionally the "trough of disillusionment" - and it remains to be seen how deeply machine learning will fall into this dip.

It's easy to see why machine learning has become so overhyped in the security field. With attackers growing steadily more sophisticated and bringing more advanced techniques to bear, calling in an inhumanly fast and powerful machine intelligence to find and stop threats sounds almost too good to be true.

But exaggerating the power and effectiveness of security technology can be very dangerous. Companies that have bought into inflated claims can be left with a false sense of confidence, leaving them vulnerable to attack - and creating a bad reputation for the rest of the industry.

At the root of this hype problem is the way the industry tends to operate.

In the drive to grow market share, many vendors will market what sells, which is not necessarily what works, and there is some worryingly misleading marketing out there. We believe the defensive side of the industry is flooded with vendors more passionate about making money than about levelling the playing field between offensive and defensive security.

Another long-standing problem, common to all technical fields, is that vendors can take advantage of their greater technical expertise and deeper familiarity with a solution and its complexities.

Very few C-level executives, and indeed only select IT heads, will have in-depth knowledge of how machine learning operates, but most will be aware of the term thanks to the current buzz around it. Combined with the subject's complexity, that hype makes it even easier to mis-sell machine learning as a miracle solution for all threats.

The good and the bad - machine learning in action

Hype issues aside, machine learning does have an important role in security - as long as its limitations can be overcome and it is applied to problems for which it is well suited.

Machine learning is a diverse subject, but essentially it comes down to a program learning from data in order to make predictions and discover information, without being explicitly programmed with a set of rules.
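To make that concrete, here is a minimal sketch in Python of a program that learns a decision rule from labelled data rather than being given one. The single "file entropy" feature and all the numbers are invented for illustration; real products learn over thousands of features.

    # Minimal sketch: learn a classifier from data instead of hand-coding a rule.
    # Feature values and labels are invented for illustration only.

    # Training data: (file entropy, label) where 1 = malicious, 0 = benign.
    samples = [(3.1, 0), (4.2, 0), (4.8, 0), (5.0, 0),
               (6.9, 1), (7.2, 1), (7.6, 1), (7.9, 1)]

    def train_threshold(data):
        # Pick the entropy cut-off that misclassifies the fewest training samples.
        best_t, best_errors = None, len(data) + 1
        for t, _ in sorted(data):
            errors = sum((e >= t) != bool(label) for e, label in data)
            if errors < best_errors:
                best_t, best_errors = t, errors
        return best_t

    threshold = train_threshold(samples)  # the rule is learned, not programmed
    print("new file at entropy 7.4 is",
          "malicious" if 7.4 >= threshold else "benign")

No human wrote the rule "flag anything above 6.9"; the program derived it from the examples, which is the essence of the approach.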

This capability has many different potential applications for security, but the two areas that have received the most attention are next generation anti-virus, and user and entity behaviour analytics (UEBA).

In anti-virus software, machine learning could potentially identify malware in a different way to the signatures relied on by traditional solutions, while UEBA seeks to establish a baseline of normal behaviour for users and machines, and spot malicious activity that deviates from the norm.

An example where we could expect machine learning to shine is in detecting a brand-new variant of malware like Andromeda.

As long as the machine learning model has been well trained on a large number of samples from the Andromeda family, it should be adept at spotting familiar characteristics in a new version; malware authors would have to make specific efforts to bypass the model.

Bypassing it is, of course, possible, but such a model may well provide better detection of new variants than a traditional signature, as the sketch below illustrates.
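As a hedged illustration of that argument, the Python sketch below trains a classifier on invented feature vectors standing in for known Andromeda samples and benign software, then tests it on an unseen "variant". The chosen features (byte entropy, import count, section count) and every value are assumptions made for the example, not real Andromeda characteristics, and this is not a description of any vendor's actual product.

    # Sketch: train on known family samples plus benign software, then test on
    # an unseen variant. Feature vectors are invented toy values in the form
    # [byte entropy, import count, section count].
    from sklearn.ensemble import RandomForestClassifier

    X_train = [
        [7.8, 12, 5], [7.6, 14, 5], [7.9, 11, 6], [7.7, 13, 5],  # "Andromeda" samples
        [5.1, 80, 4], [4.8, 65, 3], [5.5, 90, 4], [4.9, 70, 3],  # benign software
    ]
    y_train = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = malware family, 0 = benign

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # A new variant: no signature exists for it, but it shares the family's traits.
    print(model.predict([[7.5, 13, 6]]))  # [1] - flagged as part of the family

A signature matching any one training sample's hash would miss the new variant entirely; the model generalises because it matches characteristics, not exact bytes.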

By contrast, an example where machine learning falls short might be the expectation that it could detect malicious use of legitimate software, such as an SSH client or port scanner.

These may very well be classified as benign, since they are not malware as such and are commonly used by system administrators. It is very difficult for a model to judge the intent, malicious or benign, behind the actions of the operator.

The use of custom malware in targeted attacks is also a serious issue. If the malware is entirely new, there won't be any representative samples available to supply to an algorithm for training. As an unknown piece of software, it may still be flagged as anomalous relative to what the model expects.

However, the same is true of much of the legitimate software out in the real world, so allowing the solution to block anything that appears anomalous would be extremely disruptive. Additionally, sophisticated attackers are likely to make specific efforts to ensure their malware is classified as benign, in the same way that they ensure it bypasses the signatures used by traditional anti-virus.

For UEBA approaches, we would typically expect to model a variety of data flows from network and log sources over time, and then highlight deviations from the norm that could be indicative of internal reconnaissance activities, lateral movement or data exfiltration.

A database server that does not usually connect to the internet suddenly transferring 500 GB of data would be highly anomalous. Likewise, an administrative service account logging in to a large number of systems, from a machine it does not normally operate from, may well indicate stolen credentials being used for lateral movement.
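A heavily simplified sketch of the first of those checks is given below, in Python with made-up traffic figures; a real UEBA system would model many more dimensions than daily egress volume, but the baseline-and-deviation logic is the same.

    # Simplified UEBA-style check: baseline a host's daily egress volume and
    # flag days that deviate wildly from its own history. Figures are invented.
    import statistics

    # 30 days of outbound transfer (GB) for a database server that rarely
    # talks to the internet.
    history = [0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.4, 0.2, 0.1, 0.3,
               0.2, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.3, 0.2, 0.1,
               0.2, 0.4, 0.2, 0.1, 0.3, 0.2, 0.2, 0.1, 0.3, 0.2]

    def is_anomalous(today_gb, baseline, z_cutoff=6.0):
        # Flag today's volume if it sits far outside the host's own baseline.
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        return (today_gb - mean) / stdev > z_cutoff

    print(is_anomalous(0.3, history))    # False - within normal limits
    print(is_anomalous(500.0, history))  # True  - the 500 GB transfer stands out

The same logic applied to login events, rather than bytes, would surface the service account example: a count of distinct systems accessed per day that suddenly jumps from a handful to hundreds.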

However, let us consider some counter-examples. Keystroke logging and sensitive document theft from targeted user endpoints would be very concerning, but are likely to involve data quantities well within normal limits.

Lateral movement between workstations on the same subnet would also be a concern, but may be invisible if network sensor coverage does not reach that far and log sources are collected only from key servers rather than the entire endpoint estate.

Command and control channels may make use of perfectly common and legitimate services, such as cloud instant messaging platforms. Additionally, users are not entirely predictable, and may regularly access systems or services in ways they have not before, yet for perfectly benign reasons.

These examples highlight the three major blind spots suffered by machine learning:

  1. Malicious activities that we cannot detect because they appear normal with the data we have;
  2. Non-malicious activities that regularly appear abnormal and generate lots of false positives;
  3. Malicious activities that cannot be detected because we simply do not have the data required.

Overcoming the limitations - the human touch

The most important lesson is that the average enterprise network is a highly anomalous environment - anomalies are so common that there will be a continuous flow of false positives, while actual malicious activity can hide within seemingly normal behaviour.

To overcome this weakness, machine learning needs to be combined with human expertise. A highly skilled and offensively trained team is required to properly interpret and investigate the findings. Years of experience in detecting and stopping attacks means that skilled security teams are able to spot the more subtle and organic signs of malicious intrusion that would otherwise pass for normal activity.

Additionally, it is important to recognise that machine learning approaches are just one analysis technique, with advantages and disadvantages, and an effective attack detection system will consist of many different techniques that complement one another.

It will take some time for the machine learning hype to die down, and organisations must be wary of any vendor that promotes its machine learning product as a cure-all for every attack detection problem. The speed and scale of analysis delivered by a machine learning algorithm is very powerful, but the limitations above mean it can only be relied upon in specific scenarios.

However, when applied to the right problems, and combined with sufficient human experience and other analytical approaches, machine learning is a useful tool against the increasing threat of cyber-attack.

Luke Jennings is chief research officer for Countercept at MWR InfoSecurity, and can be contacted via Twitter