Can machines ‘learn’ or ‘think’?

The marriage of computing power and data is finally bearing fruit in the field of cognitive computing, sometimes called machine learning or, more controversially, artificial intelligence.

In its most everyday form, we see it in tools such as Google Translate or Microsoft’s Bing Translator, which can translate phrases and documents effortlessly across multiple languages. More futuristically, the promise of self-driving vehicles, which can complete entire road journeys without driver intervention, is already being realised.

Yet the biggest revolution in work is happening at some of the most basic levels, such as reading and dissecting legal documents to extract meaning and useful information. The tedious slog of such work can be transformed by computers able to read and parse legal phrases, summarising them or entering relevant details into a database or spreadsheet.

[Infographic: Digital tech transforming the way we work]

Are these thinking machines? The idea has fascinated philosophers and technocrats for ages. But with every advance that machines make into space normally thought of as “thinking”, the goal posts retreat. Until IBM’s Deep Blue defeated then world champion Garry Kasparov in 1997, chess had been thought of as a redoubt for human thinking.

Built to evolve

More recently, the British company DeepMind created a computer program which can learn to play 1980s arcade games, such as Space Invaders and Breakout, by trial and error, based on what it sees on the screen, but without being told any rules or given any objective except to maximise its score. It’s a classic conundrum: is the DeepMind system “thinking” or “learning”? Certainly, it improves its score, and discovers neat ways to play games better. Google acquired DeepMind for £400 million in 2014.
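
To make the trial-and-error idea concrete, here is a minimal sketch in the spirit of reinforcement learning, the family of techniques DeepMind built on. It is not DeepMind’s system, which learned from raw screen pixels with a deep neural network; this toy version applies simple tabular Q-learning to an invented one-dimensional ‘game’, and every detail of the environment and parameters is made up for illustration.

```python
import random
from collections import defaultdict

# Toy stand-in for an arcade game: states 0..9 along a line, actions move left or
# right, and reaching state 9 scores a point and ends the episode. The agent is
# told nothing except the score it receives; it improves purely by trial and error.
ACTIONS = [-1, +1]

def step(state, action):
    nxt = max(0, min(9, state + action))
    reward = 1.0 if nxt == 9 else 0.0
    return nxt, reward, nxt == 9

q = defaultdict(float)              # q[(state, action)] = estimated future score
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(2000):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the best action found so far.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the estimate towards "reward now plus discounted future score".
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: q[(0, a)]))  # best learned first move (should be +1)
```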

Yet the impressive feats of translation tools don’t indicate that the machines behind them can actually “think”, nor even understand what it is that they are translating. Instead, they rely on a huge resource of data, principally documents containing the same content translated in parallel into multiple languages. Publications from the United Nations and the European Union are highly favoured, for example, which may explain why machine translations can sound so remarkably stilted and formal.
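
A crude sketch of why such parallel text is so valuable, assuming a tiny invented English-French corpus: simply counting which foreign words co-occur with each English word across aligned sentence pairs already yields rough word-for-word translations. Real systems, statistical or neural, are far more sophisticated, but they lean on exactly this kind of aligned data, and even the toy version hints at why the output can come out sounding clumsy.

```python
from collections import defaultdict, Counter

# Toy "parallel corpus": the same sentences in English and French, in the style
# of aligned UN/EU publications. All examples here are invented.
parallel = [
    ("the committee adopted the resolution", "le comité a adopté la résolution"),
    ("the committee rejected the proposal",  "le comité a rejeté la proposition"),
    ("the assembly adopted the proposal",    "l'assemblée a adopté la proposition"),
]

# For every English word, count which French words appear in the same sentence pair.
cooc = defaultdict(Counter)
for en, fr in parallel:
    for word in en.split():
        cooc[word].update(fr.split())

def translate_word(word):
    # Pick the French word most often seen alongside this English word.
    return cooc[word].most_common(1)[0][0] if word in cooc else word

# Crude word-by-word output; real systems model phrases, word order and much more.
print(" ".join(translate_word(w) for w in "the committee adopted the proposal".split()))
```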

But to a company using such a translation service, it doesn’t matter whether the computer can “think”; what matters is whether it gets the job done as well as or better than a human. And a growing number of studies suggest that more and more jobs are susceptible. A recent study by the Bank of America forecast that the market for robots and artificial intelligence (AI) solutions will be worth $153 billion by 2020, of which AI solutions will account for $70 billion. Within ten years, the bank estimates, there could be $9 trillion of cuts in employment costs as AI systems take over knowledge work, $1.9 trillion of efficiency savings as self-driving vehicles and drones replace work done by people, and a 30 per cent boost to productivity from robots and AI, alongside manufacturing cost reductions of between 18 and 33 per cent.

Are jobs at risk?

The broad wave of cognitive computing is thus ready to break over the world of employment. But it’s not a single, simple implementation. “The area splits into two fields,” explains Andrew Martin, who is studying for a PhD in cognitive computing at the Tungsten Centre for Intelligent Data Analytics at the University of London. “There are people trying to make more and more complex systems with more and more data, hoping against hope that the problem will solve itself through big complex systems. And the other group is sitting back and going to the philosophical drawing board trying to work out what intelligence actually is, and how it emerges.”

So which group is the Tungsten Centre in? “Sort of both. We’re making big systems, but aware of the limits of what computers can and can’t do,” says Mr Martin. “We have a view of the things that won’t be solvable.”

[Infographic: Factors transforming work]

Some problems look as though they’re beyond solution by one approach, but that doesn’t mean they can’t be solved. In self-driving cars, Mr Martin says, “you have a machine that has to act in very complex situations, but it will never have the full situational awareness that a human driver does”.

Yet this sounds like some of the arguments that used to be made about chess: a computer could never win at chess, some used to argue, because it wouldn’t be able to understand the nuances of certain moves or understand ideas such as control of the centre of the board. Those arguments went by the board when IBM’s Deep Blue defeated Kasparov. Being able to do lots of calculations very quickly turned out to be a sufficient substitute for a human’s full situational awareness of the chess board.
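
As a minimal sketch of the “lots of calculations” approach, consider exhaustive game-tree search on a toy subtraction game rather than chess. Deep Blue’s search was vastly more elaborate, with handcrafted evaluation functions and specialised hardware, but the principle of exploring moves by brute force rather than “understanding” them is the same; the game and code below are purely illustrative.

```python
# Toy subtraction game: players alternately take 1-3 stones from a pile and
# whoever takes the last stone wins. Exhaustive search decides every position.

def best_move(stones, maximising=True):
    """Return (score, move): score is +1 if the maximising player can force a win,
    -1 if the minimising player can, searching every line of play to the end."""
    if stones == 0:
        # The previous player took the last stone, so the player to move has lost.
        return (-1 if maximising else 1), None
    best = None
    for take in (1, 2, 3):
        if take > stones:
            break
        score, _ = best_move(stones - take, not maximising)
        if best is None or (maximising and score > best[0]) or (not maximising and score < best[0]):
            best = (score, take)
    return best

print(best_move(10))  # (1, 2): taking two stones forces a win from a pile of ten
```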

Indeed, Google’s cars have driven millions of miles in the United States and the only accidents have been the fault of other, human drivers. In fact, a police officer recently flagged down a Google car because its driving seemed over-cautious.

Mr Martin says that with cognitive computing, “some things are instantly solvable because they’re constrained – the problems have clearly defined limits – and some people might think that solving the quickest route to somewhere isn’t cognitive computing”. But that used to be the ambit of taxi drivers with huge experience; now it’s available to anyone with a smartphone.
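
The quickest-route example is a nicely constrained problem: the road network is a graph with travel times on its edges, and a standard shortest-path search answers the question exactly. Below is a minimal sketch using Dijkstra’s algorithm on a made-up toy network; real routing services layer live traffic data and many optimisations on top of the same idea.

```python
import heapq

# Toy road network: travel times in minutes between junctions (made-up data).
roads = {
    "A": {"B": 5, "C": 10},
    "B": {"C": 3, "D": 11},
    "C": {"D": 4},
    "D": {},
}

def quickest_route(graph, start, goal):
    """Dijkstra's algorithm: repeatedly settle the junction with the smallest
    known travel time until the goal is reached."""
    queue = [(0, start, [start])]
    settled = set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in settled:
            continue
        settled.add(node)
        for neighbour, cost in graph[node].items():
            if neighbour not in settled:
                heapq.heappush(queue, (time + cost, neighbour, path + [neighbour]))
    return None

print(quickest_route(roads, "A", "D"))  # (12, ['A', 'B', 'C', 'D'])
```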

Man vs machine

So which are the fields that will be most affected by advances in cognitive computing? Analysis of legal documents is a key one. London-based law firm Berwin Leighton Paisner recently made substantial time-savings by using such a system to analyse the content of hundreds of Land Registry documents automatically, rather than getting the same work done by interns and paralegals.

“The real value that you add as a lawyer is about anomalies,” says Wendy Miller, a partner at the firm. “If clients have a huge number of contracts and want to understand them, it’s useful to have these data extraction tools. It’s applicable to a surprising number of tasks and we’re working to put it to work in other areas of law.”

At the Tungsten Centre, Mr Martin says the areas of work which will be most affected are those which “don’t need much human inspiration”. The centre is already studying the world of finance.

He points to vehicle manufacture as one area that could easily be handled by such systems and, more prosaically, to supermarket self-service checkouts. “The road haulage industry is at the biggest threat of being seriously disrupted by AI,” he says, “because motorways and motorway driving are relatively constrained environments.”

[Infographic: The impact of cognitive computing]

There have already been tests of self-driving trucks in the US, Germany, Holland and Japan by Daimler, Scania, Ford and others. The potential for employment disruption is huge, since there are 3.5 million professional truck drivers in the US alone, whose income generates support for millions more people, whether in their families or the truck stops they visit as part of their work.

What then will they move on to? How will the world of work be affected? At its core, this is the same question as that faced by horse and stable owners at the end of the 19th century as motor cars arrived. The assumption is that grooms and bridlemakers all found new work. But what’s never clear is whether they found better-paid work or mere subsistence. That tends to be the concern around the march of the new world of AI, which can also be deployed far faster than the car factories of the early 20th century could ramp up production.

[Infographic: Global tech predictions]

“The way to think of cognitive computing is that it gives us very fast and obedient, but extremely stupid, slaves,” says Mr Martin. “The parts of industries that will remain are those which require knowledge.”

But what parts are those? How do we define “knowledge” so that we can be sure it won’t be accessible to a machine-learning system in five or ten years? Mr Martin says it’s easier to think of the tasks that will be susceptible, “things that you can think of as mostly rule-following and rote behaviour, repetitive, with no creativity, or where there’s only a small amount of independent thought and a lot of people doing it”.

The contrast is with fields which require deep knowledge and experience, such as the law and medicine. Even though IBM’s Watson is being used to analyse scans and data from cancer patients in a number of hospitals in the US, the expectation is you will still need doctors and lawyers to deliver the final decisions on what to do and where to focus.

CASE STUDY: BERWIN LEIGHTON PAISNER

London-based law firm Berwin Leighton Paisner had a very specific challenge: analyse more than 700 Land Registry documents for a client, to extract details about land ownership such as the name and address of the overall owner, and related interests such as outstanding mortgages and other debts tied to them, plus any third-party interests in the title. And the answers had to be 100 per cent accurate.

In the past, the only way to do that would be to assemble a team of interns and paralegals, give them the documents and leave them to slog through until they emerged with the answers. Together with training and necessary cross-checking to make sure that nobody had made any mistakes, this could consume huge amounts of time, as well as being boring.

“I once had to do legal disclosure checking on a huge dispute where I was put in a room with documents piled to the ceiling and told to get on with it,” recalls Wendy Miller, a partner at the firm and a litigator in commercial real estate disputes.

This time the law firm turned to cognitive computing, which has begun to revolutionise much of the tedious work in legal analysis. The firm had already been looking for ways to improve efficiency. “What we do is very personnel-heavy,” says Ms Miller. Also, the documents were likely to arrive in near-random groups, making resource planning difficult. “You don’t want a team sitting around doing nothing, but it’s tricky if you then find you need 200 documents analysed by tomorrow,” she says.

The firm turned to RAVN, a British company which specialises in cognitive computing systems for information-intensive businesses. It designed a system which could scan the PDFs generated from the Land Registry and generate a spreadsheet that could be queried by the law team.
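
The workings of that system are not described here, but the general shape of such an extraction pipeline can be sketched: pull the text out of each document, pattern-match the handful of fields the lawyers care about and write one spreadsheet row per title. Everything below, from the sample text to the field names and patterns, is invented for illustration and is far simpler than a production system.

```python
import csv
import re

# Invented sample text standing in for the extracted contents of one Land
# Registry title document; a real pipeline would first pull text out of the PDF.
documents = {
    "TITLE-0001": """
        Title number: TITLE-0001
        Proprietor: Example Holdings Ltd, 1 Sample Street, London
        Charge: Registered charge dated 2 March 2012 in favour of Example Bank plc
    """,
}

# Hypothetical patterns for the fields the lawyers care about.
FIELDS = {
    "title_number": re.compile(r"Title number:\s*(.+)"),
    "proprietor":   re.compile(r"Proprietor:\s*(.+)"),
    "charge":       re.compile(r"Charge:\s*(.+)"),
}

# Write one spreadsheet row per document, one column per extracted field.
with open("titles.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=["document"] + list(FIELDS))
    writer.writeheader()
    for name, text in documents.items():
        row = {"document": name}
        for field, pattern in FIELDS.items():
            match = pattern.search(text)
            row[field] = match.group(1).strip() if match else ""
        writer.writerow(row)
```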

Compared with the 45 minutes it would take a human to examine each document, the RAVN system has already saved more than 500 work-hours. “The great efficiency of artificial intelligence is that we have complete flexibility because it’s always there in the background,” says Ms Miller.

So are the people who would have done that work out of a job? “Extracting data from documents isn’t perceived as valuable, so we were using junior people on work that’s hard to charge for,” she says. “Instead, we’ve been able to use those people on later stages of the project which have more value.”