Curbing the rise of fake domains and misinformation campaigns
A failure to curb the growing problem of misinformation could have serious repercussions for the Internet and for society as a whole
Misinformation is by no means new. Yet the mass uncertainty and anxiety caused by Covid-19 have led to a sharp uptick in tactics used to spread misleading news, falsified evidence and incorrect advice.
In the early stages of the pandemic, cybercriminals quickly exploited the global crisis by registering fake domains relating to the coronavirus. The Neustar team, for instance, were tracking nearly 30,000 of these by the end of March. Whether linked to the virus or not, the motives behind fake domains are geared towards a similar end-goal: to give attackers an air of authority, reliability and urgency, essentially tricking vulnerable targets into trusting them.
Indeed, the scale of the issue has forced tech giants to take greater levels of accountability for the spread. Facebook recently announced new measures to control the spread of false information, after it was revealed that websites spreading fake advice had attracted nearly half a billion views on the social media platform alone.
The threat of misinformation has not gone unnoticed by the cybersecurity community. New research from the Neustar International Security Council (NISC) found that the majority of cybersecurity professionals (91 per cent) felt that stricter measures should be implemented on the Internet if the issue continues.
While there is no easy fix to the problem, businesses need to be vigilant. From spotting fake domains to protecting remote working environments and deploying global taskforces, times like these require an always-on approach.
Fake and zombie domains
The majority of malicious actors are still using misinformation and fake domains for the same purposes: phishing, scams, ransomware and other profit-seeking attacks. However, it is not as straightforward to defend against fake domains as it may seem.
In the case of fake domains linked to the coronavirus, many legitimate domains are being registered containing terms such as 'Corona' or 'Covid-19'. Some of these are performing vital activities like helping to process tests or share critical advice. As a result, simply blocking domains is not an option.
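To make this concrete, the sketch below (a hypothetical scoring approach, not Neustar's own tooling) shows why keyword matching alone cannot decide whether to block: a pandemic-related term is just one weak signal, so it is combined here with an illustrative allow-list and the domain's registration age before anything is flagged.

# A minimal sketch, assuming a hypothetical allow-list and thresholds,
# of why blanket keyword blocking fails.

PANDEMIC_TERMS = ("corona", "covid")
KNOWN_GOOD = {"corona.gov.example", "covid19-testing.example"}  # illustrative allow-list

def risk_score(domain: str, days_since_registration: int) -> int:
    """Return a rough risk score; higher means more suspicious."""
    domain = domain.lower()
    if domain in KNOWN_GOOD:
        return 0  # legitimate testing or advice sites must not be blocked
    score = 0
    if any(term in domain for term in PANDEMIC_TERMS):
        score += 1                      # a keyword match alone is not enough
    if days_since_registration < 30:
        score += 2                      # newly registered domains are riskier
    return score

# Example: a week-old "corona-relief-funds.example" scores 3,
# while an established testing site on the allow-list scores 0.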
To complicate the issue, we're also seeing a rise in zombie domains. This involves attackers acquiring a domain from a recently closed business on the secondary market and using it to launch DNS attacks against an organisation without arousing suspicion.
For example, an attacker could purchase a domain on the secondary market from a restaurant that has gone out of business. That domain would previously have received steady traffic, then none at all after the closure. Once bought, it suddenly becomes active and starts seeing traffic again. Organisations need to be able to spot this pattern of zombie domain attacks: anything fitting the description should be treated as suspicious until it has been thoroughly analysed.
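As a rough illustration, the following sketch (the thresholds and data source are assumptions for the example, not a production detector) flags exactly that "busy, then silent, then suddenly active again" pattern.

# A minimal sketch of the dormant-then-suddenly-active pattern described above.

def looks_like_zombie(daily_queries: list[int],
                      quiet_days: int = 90,
                      active_threshold: int = 100) -> bool:
    """daily_queries: query counts per day, oldest first."""
    if len(daily_queries) <= quiet_days:
        return False
    recent = daily_queries[-1]
    dormant_window = daily_queries[-(quiet_days + 1):-1]
    was_dormant = all(count == 0 for count in dormant_window)
    had_history = any(count > 0 for count in daily_queries[:-(quiet_days + 1)])
    return had_history and was_dormant and recent >= active_threshold

# A closed restaurant's domain: busy for years, silent for months, then
# suddenly resolving hundreds of queries a day -> flag for analysis.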
The threat of emerging technology
While social media misinformation has reached a state of maturity, we're approaching another phase of viral misinformation in the form of deepfake technology. Threatening to erode trust even further, deepfake development is approximately five years ahead of our ability to guard against it.
Currently, evaluating whether a video is real or fake is a massive challenge. The cybersecurity community is working on a range of solutions, including technologies that allow individuals to sign and authenticate images, and crypto-algorithms, developed with the help of quantum computing, that can detect whether a bit or pixel has been altered between original transmission and reception. In the meantime, however, it is crucial that organisations are aware of the damage this technology can cause.
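As a simple illustration of the signing-and-verification idea (an integrity check only, not a deepfake detector, and assuming a hypothetical shared key exchanged out of band), a sender could sign the original image bytes so the receiver can detect whether anything was altered in transit.

# A minimal sketch: sign image bytes on transmission, verify on reception.

import hmac
import hashlib

SHARED_KEY = b"replace-with-a-real-secret"  # assumption: key shared out of band

def sign_image(image_bytes: bytes) -> str:
    return hmac.new(SHARED_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

# Flipping a single bit or pixel in the received bytes changes the digest,
# so verify_image returns False for any altered copy.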
Preventing misinformation
Evidently, solving the problem of misinformation is complex. According to the NISC research, only 36 per cent of cybersecurity execs are very confident in their organisation's ability to successfully identify misinformation and fake domains.
To feel confident in their defences, organisations need a clear understanding of the current picture. Queries leaving the network should be monitored carefully, which involves looking at the size and depth of each query. DNS allows a maximum of 63 characters per label (the text between dots), so when inspecting a domain, anything to the left of .com can be no longer than that. Teams also need to look at the character strings themselves: the Mirai botnet, for instance, randomised the first 12 characters before the dot. Importantly, newly created domains are easier to spot than zombie domains, which can get around the filters, meaning organisations may not realise they are part of an exfiltration or malware campaign until the damage is done.
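The sketch below illustrates that kind of query inspection, with assumed thresholds: it checks each label against the 63-character limit and estimates how random the leftmost label looks, in the spirit of catching Mirai-style randomised prefixes or tunnelling payloads.

# A minimal sketch of DNS query inspection; the length and entropy
# thresholds are illustrative assumptions, not recommended values.

import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy in bits per character; random strings score higher."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def suspicious_query(name: str) -> bool:
    labels = name.rstrip(".").split(".")
    if any(len(label) > 63 for label in labels):
        return True                      # exceeds the DNS label limit, likely crafted
    leftmost = labels[0]
    # Long, high-entropy leftmost labels resemble randomised or encoded prefixes.
    return len(leftmost) >= 12 and label_entropy(leftmost) > 3.5

# e.g. suspicious_query("xk3q9znf2bwa.example.com") -> True
#      suspicious_query("mail.example.com") -> False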
As many businesses switch to a remote working model, a cause for concern is the change in how DNS queries are identified. In corporate IT infrastructure, a tool typically sits on the network to inspect these queries and filter them for employees. In a remote environment, everything comes through an ISP, which can leave an organisation vulnerable. Employees' home networks are often not sufficiently equipped: endpoint protection software hasn't been installed, or tools haven't been configured to route traffic back through the VPN and the corporate infrastructure.
Organisations also need to be alert to how their networks and, in turn, brand could be used to spread misinformation. On an open Internet where people can register domains freely and spread information via social media, the risk is growing significantly. Alongside implementing technological solutions, businesses need to develop global taskforces to monitor and shut down fake domains and false evidence.
Ultimately, curbing the spread of misinformation and fake domains is everyone's responsibility. A failure to do so could have serious repercussions not only for the Internet, but for society as a whole.
Rodney Joffe is SVP and fellow at Neustar, and chairman of the Neustar International Security Council