Cyber criminals use AI to mimic CEO's voice and trick manager into transferring £200,000

The manager heard the familiar voice of his boss and suspected nothing during the call

The manager of a firm was tricked into transferring $243,000 (around £200,000) to a bank account controlled by fraudsters after they used AI software to mimic the voice of his boss.

The crime, in which cyber criminals used AI-based software to mimic the voice of the CEO of the German parent company of an undisclosed British energy firm, occurred in March, according to the Wall Street Journal.


The gang telephoned the senior executive of the British firm and used the software-generated voice to trick him into making an urgent money transfer to the account of a supposed Hungarian supplier.

The executive was assured that the money transferred to the firm in Hungary would be reimbursed immediately. Hearing the familiar voice of his boss, and therefore suspecting nothing amiss, the manager made the transfer as requested.

The payment, however, was never reimbursed, and the cyber criminals tried their luck a second time, contacting the executive again and asking him to make another urgent money transfer.

This time, though, the executive noticed that the call was coming from an Austrian number and refused to make the payment. He also launched an investigation at his end.

According to the WSJ, the funds transferred to Hungary were eventually moved to Mexico and then "distributed to other locations". The case is currently under investigation, but the perpetrators have not yet been identified.

Voice phishing (vishing) attacks are not new, but this is likely the first instance where criminals used AI to carry out a voice-spoofing attack, says Rüdiger Kirsch, a fraud expert at Euler Hermes Group, the insurer that ultimately covered the entire cost of the payment.

Jake Moore, a cybersecurity specialist at ESET, has warned of a "huge rise in machine-learned cyber-crimes in the near future".

He continued: "We have already seen deepfakes imitate celebrities and public figures in video format, but these have taken around 17 hours of footage to create convincingly. Being able to fake voices takes fewer recordings to produce."

In a report last year, Pindrop, a company that develops security software and protocols for call centres, revealed that the rate of voice fraud - primarily affecting banks, credit unions, brokerages, insurers and card issuers - increased by more than 350 per cent in the four years from 2013 to 2017.

Voice-channel fraud also rose by 47 per cent between 2016 and 2017, by which point one in every 638 calls was fraudulent.

In 2017, a study by researchers from the University of Eastern Finland warned that voice recognition systems used by various organisations as a form of biometric authentication can be easily fooled by impersonators.
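To see why such systems are vulnerable, it helps to know how a typical speaker-verification pipeline works: it condenses a voice sample into a fixed-length embedding and accepts a caller whose embedding sits close enough to an enrolled one. The minimal Python sketch below illustrates the idea using the open-source Resemblyzer library; the file names and the 0.75 acceptance threshold are illustrative assumptions, not details from the Finnish study.

```python
# Minimal speaker-verification sketch using the open-source Resemblyzer
# library. File names and the acceptance threshold are illustrative
# assumptions, not values taken from any cited study.
import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Enrolment: derive a fixed-length voice embedding from a genuine sample.
enrolled = encoder.embed_utterance(preprocess_wav("ceo_enrolment.wav"))

# Verification: embed the incoming caller's audio the same way.
caller = encoder.embed_utterance(preprocess_wav("incoming_call.wav"))

# The embeddings are L2-normalised, so a dot product gives cosine similarity.
similarity = float(np.dot(enrolled, caller))

# A skilled impersonator - or an AI-cloned voice - only has to push this
# score past the threshold to be accepted as the enrolled speaker.
THRESHOLD = 0.75  # illustrative; real systems tune this per deployment
print(f"similarity={similarity:.3f}",
      "ACCEPT" if similarity >= THRESHOLD else "REJECT")
```

The weakness the researchers highlighted follows directly from this design: the system measures closeness, not authenticity, so any voice that lands near the enrolled embedding passes.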

Last year, China's Baidu was reported to have developed 'Deep Voice' software that could clone a human voice from less than four seconds of training audio, raising fears about the security of voice biometrics.
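To give a sense of how little reference audio modern zero-shot cloning needs, the hedged sketch below uses the open-source YourTTS model via the Coqui TTS library as a stand-in; Baidu's Deep Voice was never released as a public API, and the file names here are placeholders.

```python
# Hedged sketch of zero-shot voice cloning with the open-source Coqui TTS
# library and its YourTTS model. This is a publicly documented stand-in for
# the kind of capability described above, not Baidu's Deep Voice itself;
# "target_sample.wav" is a placeholder for a short clip of the target voice.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/your_tts")

# Synthesise arbitrary text in the voice captured by the reference clip.
tts.tts_to_file(
    text="This is a demonstration of zero-shot voice cloning.",
    speaker_wav="target_sample.wav",  # a few seconds of the target's speech
    language="en",
    file_path="cloned_voice.wav",
)
```

That a few seconds of audio suffice as the reference clip is exactly what makes voice a weaker biometric than it appears, and why experts expect attacks like the one described above to become more common.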