Personal data may penalise ‘uninsurables’

Motor, home and health insurers have never had more access to data about their customers. Thanks to the internet of things (IoT), there is a growing ability to assess how we drive, live our lives and protect our assets, and to price us according to our own individual risk profile. There is also an opportunity to offer feedback and incentives to insurance customers to encourage better behaviour, thereby reducing claims and enabling insurers to offer discounted rates to less risky customers.

However, by sharing this information about ourselves, could some consumers be penalised for things that are beyond their control: their age, for instance, genetic predisposition to disease or even personality? Thanks to the European Union’s Gender Directive, in force since 2012, insurers can no longer charge men and women different prices, even where the difference is based on actuarially sound analytics. But there are many other personal attributes that could be used to determine what individuals are charged for their insurance.

When Admiral was forced to withdraw its plans to partner with Facebook late last year, after the social media company said the scheme would breach its privacy rules, it raised an important moral question: just because an insurance company can analyse customers’ use of social media and use that information as a rating factor doesn’t necessarily mean it should.

“There are ethical issues here and a need for strong regulation in place,” says Nicolas Michellod, a senior analyst in Celent’s insurance practice. “We carried out some research last year asking consumers what they think about insurance companies using their data on social networks to provide new products or to track them for claims fraud. It’s clear there’s a big gap between what insurance companies think they should be allowed to do and what consumers think.”

Data has always been a commodity in the insurance business, allowing underwriters to assess and price risk accurately, from motor insurance through to major commercial operations. But as insurers mine data from an increasingly vast range of sources, accelerating the need for artificial intelligence to create meaning from it, they will inevitably gain access to more and more information about their customers.

Regulators are watching carefully as insurers tap consumer data. The Financial Conduct Authority dropped its probe into insurers’ use of big data last year, acknowledging that it did not want to hinder industry innovation and that the use of information about consumer behaviour was “broadly positive”. But it also noted there could be “some risks to consumer outcomes”, with some individuals finding it harder to access affordable cover.

In personal lines, the connected car, the connected home and wearable devices promise ever-greater insight into a customer’s lifestyle, behaviour and circumstances. However, insurers should use this information responsibly, says Andrew Brem, chief digital officer at Aviva, particularly as pricing becomes more tailored to each individual rather than pooled across the entire marketplace.

“Social, public, IoT, genetic and other new forms of data might indeed reveal much greater variance in risk than we can measure today, and the natural implication would be greater extremes of pricing,” he explains. “This could raise significant questions about fairness in society, and the insurance industry will need to work with governments and regulators to agree what factors society feels we should, and should not, take into account when pricing risk.

“This is a rapidly evolving area and we continually challenge ourselves as to whether our customers would consider use of these data sources acceptable. Insurance has a social role to play in helping those in danger of falling out of insurance, so we’ll need to look at solutions for everyone.”

Telematics insurance products are hailed as one example of the industry innovating and leveraging data to cater to the needs of a group of disadvantaged consumers. In return for having their driving behaviour monitored, younger drivers are able to access more affordable motor insurance.

For 18 to 20-year-olds, who pay an average of £972 a year for their cover compared with an average of £367 for other drivers, according to the RAC, this can make all the difference. Sixty-two per cent of young drivers see insurance as the biggest barrier to owning and running a car.

Not only does telematics give young drivers access to more affordable cover, it is also a popular example of how a feedback loop can improve the underlying risk. “Our insurer clients find that only a tiny fraction of customers – typically less than 3 per cent – don’t respond to feedback on their driving behaviour,” says Selim Cavanagh, managing director of Wunelli and vice-president of LexisNexis. “So there’s a massive societal benefit, and it also helps the individual pay less and still be mobile. Feedback is a really powerful tool.

“With telematics, younger drivers pay around 41 per cent less for their insurance because telematics tells them they are being monitored, and most young people are willing to listen to feedback and modify their behaviour.”
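To see what those figures imply when put together, the minimal Python sketch below applies the RAC’s £972 average premium for 18 to 20-year-olds and the roughly 41 per cent telematics discount cited by LexisNexis. It is purely illustrative arithmetic; the constant names and function are hypothetical and not drawn from any insurer’s pricing model.

```python
# Illustrative arithmetic only: combines the figures quoted above
# (RAC average premiums and the ~41% telematics discount) to show
# the implied saving for a young driver. Names are hypothetical.

YOUNG_AVG_PREMIUM = 972.0   # £/year, drivers aged 18-20 (RAC figure)
OTHER_AVG_PREMIUM = 367.0   # £/year, all other drivers (RAC figure)
TELEMATICS_DISCOUNT = 0.41  # ~41% reduction cited by LexisNexis

def telematics_premium(base_premium: float,
                       discount: float = TELEMATICS_DISCOUNT) -> float:
    """Premium after the telematics discount is applied."""
    return base_premium * (1.0 - discount)

if __name__ == "__main__":
    discounted = telematics_premium(YOUNG_AVG_PREMIUM)
    saving = YOUNG_AVG_PREMIUM - discounted
    print(f"Average young-driver premium: £{YOUNG_AVG_PREMIUM:.0f}")
    print(f"With telematics (~41% less):  £{discounted:.0f}")
    print(f"Implied annual saving:        £{saving:.0f}")
    # On these figures: roughly £573 with telematics versus £972
    # without, a saving of about £399 a year, though still well
    # above the £367 average paid by other drivers.
```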

There are clearly benefits to be gained at all levels when consumers opt to share their personal information with insurers and when this information is used to reduce risky behaviour. However, while certain rating factors, such as driving style, are within customers’ control, there is very little that can be done to alter or improve other factors insurers could use to price cover.

In the United States, there has been a backlash against biometric screening as part of corporate wellness programmes. Certainly there is a feeling that it is morally wrong to deny health insurance to individuals because of their predisposition to certain diseases. “Big data and the IoT can be used by the insurance industry as a force for good or a force for bad,” says Mark Williamson, a partner at law firm Clyde & Co. “It’s about having the right checks and balances in place.

“One of the big concerns is a moral argument around using uncontrollable rating factors. Should an individual be penalised for their genetic make-up when there’s nothing they can do about it? These uncontrollable factors might be things individuals are not happy to share or that they may not even know about themselves. Is that morally correct?”

He believes these issues are now a matter of public policy, with the onus on the industry, governments and regulators to keep up with developments. The EU’s General Data Protection Regulation is expected to be instrumental in determining how insurers should, and should not, be allowed to use customers’ data in future.

“As a society we shouldn’t be looking to exploit people who are naive and vulnerable,” Mr Williamson concludes. “And if, by drilling down to an individual level, insurers are discriminating against these individuals, is it right for a sector founded on the concept of pooling individual risks to be doing that? Or are the government and regulator going to have to do something about it?”