HSBC voice recognition system cracked by twins
Cracking of HSBC's phone-banking security system comes one month after a warning that voice recognition security is critically flawed
HSBC's voice recognition security system has been cracked by a journalist using his twin brother to mimic him in an experiment that underlines warnings that relying on voice recognition alone is insecure.
BBC Click reporter Dan Simmons set up an HSBC account, signing up to the bank's voice ID authentication service. HSBC claims the system is secure because each person's voice is 'unique'.
"But the bank let Dan Simmons' non-identical twin, Joe, access the account via the telephone after he mimicked his brother's voice."
HSBC launched the technology in 2016, claiming that it was totally secure. However, Joe Simmons says it took him only eight attempts to break into the account.
"What's really alarming is that the bank allowed me seven attempts to mimic my brothers' voiceprint and get it wrong, before I got in at the eighth time of trying," he said. "Can would-be attackers try as often as they like until they get it right?"
HSBC is remaining tight-lipped. In a statement, it admitted to the BBC that twins remain an anomaly for the system.
"The security and safety of our customers' accounts is of the utmost importance to us. Voice ID is a very secure method of authenticating customers," said a spokesperson.
"Twins do have a similar voiceprint, but the introduction of this technology has seen a significant reduction in fraud, and has proven to be more secure than PINS, passwords and memorable phrases."
However, the cracking of the HSBC voice recognition security system comes just a month after French artificial intelligence start-up Lyrebird claimed that voice-based security systems would soon be rendered highly insecure by the onward march of technology.
And, just months after HSBC launched its voice recognition system, the University of Alabama at Birmingham in the US warned that relying solely on voice for authentication or automation might leave systems vulnerable to voice impersonation attacks.
"Advances in technology, specifically those that automate speech synthesis such as voice morphing, allow an attacker to build a very close model of a victim's voice from a limited number of samples. Voice morphing can be used to transform the attacker's voice to speak any arbitrary message in the victim's voice," warned the University.