Talking tough: banks boost security with voice ID

Voice ID systems save banks and other institutions millions every year, but is the voice a reliable identity marker in the fight against fraud?


How safe is your voice as an identity marker? As biometric technology continues to make strides, opinion is split on whether voice tech is a blessing or a curse when it comes to fighting fraud.

Biometrics are based on physical or behavioural measurements, such as facial recognition or an individual’s hand movements. Voice scans authenticate a person’s identity using vocal characteristics such as pitch and intensity, which are compared against an existing database of voice samples.
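In practice, systems of this kind typically reduce a caller’s audio to a numerical “voiceprint” and measure how close it sits to the template captured at enrolment. The sketch below illustrates that comparison step only; the embedding size, threshold and function names are illustrative assumptions, not any bank’s actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voiceprint vectors, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(call_voiceprint: np.ndarray,
                  enrolled_voiceprint: np.ndarray,
                  threshold: float = 0.75) -> bool:
    """Accept the caller if their voiceprint is close enough to the template
    recorded at enrolment. The 0.75 threshold is purely illustrative: raising
    it reduces false accepts at the cost of more false rejects."""
    return cosine_similarity(call_voiceprint, enrolled_voiceprint) >= threshold

# Stand-in vectors; a real system would derive these from call audio with a
# speaker-recognition model, not random numbers.
enrolled = np.random.default_rng(0).normal(size=192)
incoming = enrolled + np.random.default_rng(1).normal(scale=0.1, size=192)
print(verify_caller(incoming, enrolled))  # True for this contrived example
```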

HSBC UK’s voice ID technology prevented £249 million worth of fraud in the last year, according to the bank. Since its launch in 2016, the technology has prevented £981 million of customers’ money from falling into the hands of fraudsters, with the rate of attempted fraud down 50% year-on-year as of May 2021.

“Telephone fraudsters may attempt to impersonate customers by stealing or guessing personal information to pass security checks, but replicating someone’s voice is much more difficult,” says David Callington, head of fraud at HSBC UK.

Voice ID detects whether the voice matches that on file for the customer “and therefore whether the caller is genuine”, Callington explains. The bank’s system allows it to make changes to different security settings: for example, limiting the number of attempts that can be made before manual authorisation is required. It regularly reviews and changes the system to enhance security, Callington adds.
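As a rough illustration of the kind of setting Callington describes, the snippet below enforces an attempt limit before a call is escalated to manual authorisation. The limit of three and the function name are assumptions for illustration, not HSBC’s actual configuration.

```python
MAX_VOICE_ATTEMPTS = 3  # illustrative policy value, not HSBC's real setting

def handle_call(attempt_results: list[bool]) -> str:
    """Step through successive voice ID attempts on one call and decide how
    the call proceeds once the attempt limit is reached."""
    for attempt, matched in enumerate(attempt_results, start=1):
        if matched:
            return f"authenticated by voice on attempt {attempt}"
        if attempt >= MAX_VOICE_ATTEMPTS:
            return "attempt limit reached: escalate to manual authorisation"
    return "call ended before the attempt limit was reached"

print(handle_call([False, False, False]))  # escalates after the third failure
```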

NatWest also uses voice biometrics as an alternative to security mechanisms based on passwords or other static identifiers, which can be stolen or forgotten. The bank deploys a voice biometric solution from AI-based speech recognition firm Nuance, which screens incoming calls and compares voice characteristics – including pitch, cadence, and accent – to a digital library of voices associated with fraud against the bank. The software quickly flags suspicious calls and alerts the call centre agent to potential fraud attempts.

As well as a library of “bad” voices, NatWest agents now have a “whitelist” of genuine customer voices that can be used for rapid authentication, without the need for customers to remember passwords and other identifying information.
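A minimal sketch of how that two-sided screening could fit together: incoming audio is checked against a library of known fraudulent voiceprints first, then against the whitelist of enrolled customers for fast authentication. The structure, thresholds and helper names are assumptions for illustration; Nuance’s product is not built this way and operates at far greater scale.

```python
import numpy as np

def best_match(call: np.ndarray, library: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the library entry whose voiceprint is most similar to the call."""
    scores = {
        name: float(np.dot(call, v) / (np.linalg.norm(call) * np.linalg.norm(v)))
        for name, v in library.items()
    }
    name = max(scores, key=scores.get)
    return name, scores[name]

def screen_call(call: np.ndarray,
                fraud_library: dict[str, np.ndarray],
                customer_whitelist: dict[str, np.ndarray],
                alert_threshold: float = 0.8,
                auth_threshold: float = 0.75) -> str:
    # 1. Compare against voices already linked to fraud and warn the agent on a hit.
    _, fraud_score = best_match(call, fraud_library)
    if fraud_score >= alert_threshold:
        return "alert agent: caller resembles a known fraudulent voice"
    # 2. Otherwise try to authenticate against enrolled customer voiceprints.
    customer, score = best_match(call, customer_whitelist)
    if score >= auth_threshold:
        return f"authenticated as {customer}"
    # 3. No match either way: fall back to standard security questions.
    return "no match: route to manual identity checks"
```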

Jason Costain, head of fraud prevention at NatWest, says the bank “can detect when we get a fraudulent voice coming in across our network as soon as it happens”. Voice biometric technology is giving it a clear picture of what its customers sound like – and what criminal voices sound like, too.

“Using a combination of biometric and behavioural data, we now have far greater confidence that we are speaking to our genuine customers and keeping them safe.”

War of attrition

However, the rise of “deepfakes” means that voice biometrics can be cloned and used for fraudulent ends. As the technology improves and becomes more widely available, fraudsters follow the money, says Susan Morrow, head of R&D at Avoco Secure, a digital identity specialist. They then create systems to exploit the technology using the same techniques.

While biometric technology is often viewed as the ultimate in authentication and verification, “this is a war of attrition, and voice biometrics – like any other tech – can only be seen as risk-reduction, not a cure,” says Morrow. “Just as deepfakes for video have arisen, deepfakes for audio will increasingly be used for crimes that involve impersonation.”

So how reliable is voice as a biometric marker, and should banks and public services rely on it? Security is not achieved by a single measure, especially when a system has multiple moving parts, as is the case with payments, says Morrow.

“Voice biometrics is a useful measure but it is only part of an overall system, and it will be exploited. As with any system, security measures need to be part of an ecosystem of checks and measures.”

As customers part with their biometric data, there’s also an issue of trust.

Research by identity and authentication firm Callsign shows that just 38% of consumers feel comfortable using static biometrics, such as fingerprint ID or facial recognition, to confirm their identity when using a service or buying a product.

“The problem with static biometrics is that it’s intrusive and not privacy preserving,” says Chris Stephens, head of solution engineering – UK, Europe and South Africa at Callsign. “Static biometrics are also prone to inherent biases and once compromised, there is nothing anyone can do to stop attackers getting in.”

However, a recent survey by GetApp, a Gartner company, shows that younger generations seem more comfortable with the idea of using biometric technology such as voice scans than older generations do. More than half of Generation Z (born approximately from the mid-1990s to the early 2010s) said they had voluntarily shared biometric data with a private company, compared with 29% of over-50s.

“These results should not come as a surprise, as a third of millennials and Generation Z have most probably had experience with this type of technology, for example with chatbots and voice-activated devices such as Siri and Amazon Alexa,” says Sonia Navarrete, senior content analyst at GetApp.

Layering verification

Organisations are clearly reaping the rewards of their investments in voice biometrics, particularly banks and financial services companies. However, it might be wise to view these systems as part of a broader, holistic approach to anti-fraud measures.

There are security limitations if businesses focus solely on voice technology, says Stephens. However, by layering in other verification requirements – for example, behavioural biometrics like location or the way the person uses a mouse – consumers can access services such as online banking just as quickly, easily and securely.

“This also means that businesses only hold the information that is completely necessary, thereby preserving privacy and building trust with customers.”
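To make that layering idea concrete, the sketch below combines a voice-match score with location and behavioural signals into a single decision, so no one factor is trusted on its own. The signal names, weights and thresholds are invented for illustration and are not taken from Callsign or any bank.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Each score is in [0, 1]; higher means more consistent with the genuine
    customer. The fields and weights below are illustrative assumptions."""
    voice_match: float        # similarity of the caller's voiceprint to the enrolled template
    location_familiar: float  # how typical this location and device are for the customer
    behaviour_match: float    # consistency of mouse movement and typing rhythm

def risk_decision(s: SessionSignals) -> str:
    """Blend independent signals so that a single compromised factor, such as a
    cloned voice, is not enough on its own to pass."""
    combined = 0.5 * s.voice_match + 0.2 * s.location_familiar + 0.3 * s.behaviour_match
    if combined >= 0.8:
        return "allow"
    if combined >= 0.5:
        return "step up: ask for an additional verification factor"
    return "block and refer to the fraud team"

# A convincing cloned voice alone does not clear the bar if the other signals are weak.
print(risk_decision(SessionSignals(voice_match=0.95, location_familiar=0.1, behaviour_match=0.2)))
```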