Voice morphing software will enable hackers to launch these attacks, a team at the University of Alabama at Birmingham (UAB) has found. The UAB team warned that people could inadvertently leave voice samples behind as part of daily life.
As a second case study for the paper, the research team also examined the implications that stolen voices have for human communication. The UAB study – a collaborative project between the UAB College of Arts and Sciences Department of Computer and Information Sciences and the Center for Information Assurance and Joint Forensics Research – took audio samples and demonstrated how they can be used to compromise a victim's security and privacy.
Once an attacker defeats voice biometrics using fake voices, they could gain unfettered access to the system employing the authentication functionality, whether a device or a service. Results showed that a majority of advanced voice-verification algorithms were trumped by the researchers' attacks, rejecting the fake voices only 10-20% of the time. On average, humans tasked with verifying voice samples fared little better, rejecting only about half of the morphed clips.
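To see why morphed voices can slip past such systems, consider that many speaker-verification schemes reduce a voice to a numerical embedding and accept a candidate whose embedding is sufficiently similar to the enrolled speaker's. The sketch below is a hypothetical illustration of that general idea, not the UAB team's actual system or any specific commercial algorithm; the embeddings and threshold are made up for demonstration.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_speaker(enrolled, candidate, threshold=0.8):
    # Accept the candidate voice if its embedding is close enough
    # to the enrolled speaker's embedding (hypothetical threshold).
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy embeddings (real systems derive these from audio features):
enrolled = [0.90, 0.10, 0.40]   # the enrolled speaker
genuine  = [0.88, 0.12, 0.41]   # the real speaker, later
morphed  = [0.85, 0.15, 0.45]   # a morphed imitation tuned to the target

print(verify_speaker(enrolled, genuine))   # accepted
print(verify_speaker(enrolled, morphed))   # also accepted: the morph is close enough
```

Because a good morphing tool pushes the fake voice's features close to the target's, the similarity score lands above the acceptance threshold just as a genuine sample would, which is consistent with the low rejection rates the researchers reported.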