We Are the Robots

Chris Middleton looks at the dangerous rise of machine decision-making.


Human barcode

The robots aren’t coming, they’re here. The bad news is they’re us – unless we urgently write new rules that benefit all of human society. Chris Middleton presents a 3,000-word personal report on the ill-considered rise of automation and machine-based decision-making.

Technology is the first thing blamed by customers who are struggling to get banks, utilities, local councils, insurance providers, telcos, and other organisations to understand them.

But the real problems are the rules, the policies, that some organisations write before paying a technology company to cast them into software as digital ‘statues’ of their belief system.

Combined with poor or incomplete data in a networked, highly automated world, the output of this ‘read-only’ process, as interpreted by machines, can be a dystopian nightmare for any customer or citizen who doesn’t fit in.

Later on, I’ll introduce you to just such a man. His story will shock you.

And because the employees in many organisations obey the same rules that the software does – often by reading them aloud to customers from a computer screen – awkward humans can be made to disappear*.

The origin of the word ‘robot’ is the Czech robota, meaning ‘forced labour’ – a term brought into popular usage by Karel Čapek’s 1920 play ‘Rossum’s Universal Robots’.

But while most people focus on the ethical aspects of robots resembling and replacing human beings – as the False Maria robot did in Fritz Lang’s 1927 film ‘Metropolis’ – few people consider the flip side of that process: human beings behaving more and more like robots in a world of repetitive, highly regulated processes. That, I would argue, is where the real danger lies in an increasingly automated world.

Let me explain.

Rise of the robots

Recently, I had the privilege of taking part in a robotics workshop for schoolchildren – thanks to my ownership of a real humanoid robot. Robotics, coding, AI, and even computer ethics are things that many of today’s primary school kids are being taught. Indeed, the UK government says that every child in that age group must know what algorithms are: the sequential, operational steps behind every piece of software. Turn left, turn right, switch on, switch off.
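
To make that concrete, here is a toy sketch of my own – the function and its steps are invented for illustration, and come from no curriculum – showing what ‘an algorithm’ boils down to in code:

```python
# A toy illustration: an algorithm is just an ordered list of unambiguous
# steps, executed one after another, with no judgement involved.
def run_algorithm(steps):
    for step in steps:
        print("Executing:", step)

# The kind of sequence a primary-school pupil might write for a toy robot.
run_algorithm(["switch on", "turn left", "turn right", "switch off"])
```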

The good teachers who run these workshops have risen to the challenge by sharing a catchy ditty, ‘The Algorithm Song’, and getting their youngest pupils to sing it in class. By drumming the word into them – via the tune of a recent hit single – the hope is that these kids will leave primary education understanding the importance of algorithms in their daily lives.

But as I will explain, we’re all singing ‘The Algorithm Song’ every day, and it’s not always to such a happy tune. And our fascination with robots is really just a minor-key refrain in what is already the soundtrack to this century: the rise of automation and machine-based decision-making – sometimes based on increasingly antisocial rules.

But let’s stay with that robot refrain for a moment. At a Cloud Week event in Paris in July 2015, Fujitsu’s platform chief, Chiseki Sagawa, predicted that by 2025, humanoid robots will be commonplace in homes and offices. Sagawa is biased – machine-men have long been part of the cultural narrative in his home country, Japan – but that’s not to say that he’s wrong. In the same month, the first hotel staffed entirely by robots, Henn-na, opened in Nagasaki. [For a free, 5,500-word special report on the rise of humanoid robotics, go here.]

Gimmick or not, the stated aim of the venture isn’t to increase the sum of human happiness; it’s to reduce HR costs and increase efficiency, and thus make the hotel’s shareholders wealthier. We could describe that as the core rule of the system, its basic condition. On top of that rule are written the algorithms that make it real – step-by-step operational instructions that each robot follows in pursuit of a pre-defined outcome.
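
As a purely hypothetical sketch – the rule, names, and steps below are my inventions, not the hotel’s actual software – that layering looks something like this:

```python
# Hypothetical sketch: a core business rule sits at the top, and step-by-step
# operational algorithms are written beneath it to make it real.
CORE_RULE = "minimise staff costs, maximise efficiency"

def check_in_guest(guest_name, room_number):
    # One operational algorithm in service of the core rule: a fixed
    # check-in sequence a robot receptionist could follow verbatim.
    steps = [
        f"Greet {guest_name}",
        "Scan identity document",
        "Take payment",
        f"Issue key for room {room_number}",
    ]
    for step in steps:
        print(step)

check_in_guest("A. Guest", 101)
```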

So essentially, the hotel’s human guests are entering a money-making machine that also offers them a place to sleep. And as the Internet of Things grows, highly automated environments will proliferate, removing job after job from the human employment market. Up to 47 per cent of all jobs could be automated in the coming decades, according to Dr Anders Sandberg, James Martin Research Fellow at Oxford University’s Future of Humanity Institute.

(Any job that can be defined is a job that can be automated. Again, this is explored in more detail in this special report, which features contributions from the world’s leading robotics experts.)

However, the Japanese obsession with manufacturing synthetic copies of themselves is really a piece of epic misdirection. Creating a mechanical device that can walk, talk, and resemble a human is just a physical engineering challenge; it has little to do with the programming, intelligence, or intention behind the scenes.

Robots don’t need a human face, or any face at all. They’re already embedded in the machinery of Western society.

Automation is not a bad thing; it was the core principle of the Industrial Revolution. In a networked, big data-driven world, the real issues today are the algorithms, along with the data that those algorithms rely on to codify business directives and create desired, predestined outcomes. In that environment, systems increasingly rely on systems that rely on other systems.

But what if the original algorithms behind an automation programme are based on a bad idea? Let me explain.

Who tells the tellers?

International banks are in the vanguard of strategic automation – investment platforms and customer service operations are but two key areas. But it’s easy to automate a bank’s customer-facing processes, because most are already entirely controlled by software.

Anyone who’s walked into a branch of their bank this century is well aware that today’s Financial Services companies are now, in effect, software programmes fronted by human beings. Those employees – many excellent, professional, committed people among them – follow step-by-step instructions in any scenario, and are forbidden to depart from them.

To go into a branch to open an account, seek investment advice, take out a mortgage, or request a credit card, is to be talked through a preset series of slides by a human being. It’s purely a matter of rules and algorithms, of you satisfying preset conditions. Interrupt that flow to ask a question and you’re told, “I’ll just read this next slide to you.” The bank’s employees have no choice but to obey this step-by-step process and tick those boxes.

In a sense, it’s insulting to everyone involved to even attempt to give this a human face. Whatever qualities banks’ employees might have as imaginative, talented, skilled, or highly qualified individuals, they may as well be robots, like the ones in that Japanese hotel.

Today’s banks are really giant compliance statements that sit behind a set of automated processes, which are designed to maximise remote shareholder profit, not public service. They’ve become sets of increasingly uncompromising algorithms, with which everyone – employees and customers alike – must comply.

And if you don’t satisfy shareholders’ conditions you can be deleted from society. I’ll prove this to you in a moment.

That’s why smartphone-only accounts are becoming popular; they’re more convenient and remove all human pretence and ambiguity. But their popularity also reveals that we’re beginning to prefer machine-based decision-making to human interaction. Or rather, to prefer using the software ourselves to having a human being read the instructions to us, like a robot.

Now, this is all very well when the data that those systems rely on is accurate, and if the algorithms that drive them have been conceived to increase the sum of human happiness. But what if the data feeding those systems is wrong? And what if the data-gathering process is also the result of flawed algorithms that obey equally flawed policies? And what if the underlying rules on which any one of these systems is based are no longer in customers’ – or even society’s – best interests?

Also at Cloud Week 2015 in Paris, Constellation Research’s Ray Wang talked of a Digital Bill of Rights, which, among other things, would protect people’s right to opt out of digital systems and prevent them being oppressed by their own data. That’s great, but it misses two key points. First, it’s increasingly difficult to opt out of digital society, unless you opt out of basic services too – including pensions, benefits, and tax. And second, it’s often not ‘our’ data that oppresses us, but someone else’s.

Ladies and gentlemen: meet one of ‘The Disappeared’.

When algorithms attack

Five years ago, someone I know moved half a mile down the road from one apartment to a bigger one, in the same town where he’d been living for a decade. A week later, he received a threatening letter from a council 100 miles away. It told him that he owed nearly £1,200 in unpaid Council Tax for a property he’d never lived in. The letter was genuine, the automated result of a computer algorithm; no human being was involved.

Fearing identity theft, my friend phoned the council involved. They informed him that if he didn’t pay, they’d take steps to recover the money, such as by seizing property from his new address. He explained that he’d never even visited their town, let alone lived there, and so couldn’t possibly owe them the money.

They told him that someone with the same (very common) name as him had moved out of that address and disappeared, owing back-taxes. A national database flagged my friend as having moved at roughly the same time (actually many weeks later) and so, they said, he must be their absconding debtor. At that point, two completely different people, linked only by a very common name, became one person in the databases that govern our lives.

My friend’s age, blemish-free tax records, National Insurance number, former address just streets away, and good credit history were all irrelevant to this particular system – thanks to an algorithm and a poorly designed rule that said “recover money quick, by any means necessary”. Once that instruction kicked in, machine-based decisions and algorithms set about dismantling my friend’s financial reputation, piece by piece.
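
Reconstructed from the story as it was told to me – every field name and the time window below are my own guesses, not the council’s actual code – the matching logic amounts to something like this:

```python
from datetime import date, timedelta

# A hypothetical reconstruction of the rule described above; field names and
# the twelve-week window are illustrative guesses, not the real system.
def is_absconding_debtor(debtor, citizen):
    same_name = debtor["name"] == citizen["name"]
    moved_around_same_time = (
        abs(debtor["move_date"] - citizen["move_date"]) <= timedelta(weeks=12)
    )
    # Note what is NOT checked: National Insurance number, tax history,
    # former address, credit record. Two strangers who share a common name
    # and happen to move house in the same season become one person.
    return same_name and moved_around_same_time

debtor = {"name": "John Smith", "move_date": date(2010, 3, 1)}
citizen = {"name": "John Smith", "move_date": date(2010, 5, 10)}
print(is_absconding_debtor(debtor, citizen))  # True: case 'solved', wrongly
```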

Like anyone who’s found themselves trapped by poor data and/or absurd algorithms, his presumed and automated guilt placed all the onus on him to extricate himself from the mess. And, thanks to another algorithm, the machine-based judgement generated by the flawed system had already been logged with the credit reference agencies. 

So now it was official: he was a tax-evader who had no right of appeal against his machine-generated ‘sentence’.

Banking on disaster

At the same time, my friend was trying to open a bank account for a company he’d just set up – a legal requirement for any new venture. At that time, he had a good income; no debts; no criminal record; a five-figure sum in his personal bank account, which he wanted to invest; and a 50 per cent stake in a large, valuable property, having just sold his own and moved in with his partner. He also had no credit cards or store cards, preferring to buy only what he could afford (it seems almost quaint, doesn’t it?).

To people of his parents’ and grandparents’ generations, my friend would have been seen as an upstanding citizen, a model of financial probity and common sense. But the core rule on which the Financial Services sector used to be based – debt bad, savings good – has been turned on its head: debt is now good – for the shareholders that own it – as long as the debtors don’t abscond. Today, the debtor might be an entire country, of course, such as Greece, or one of those students who are little more than grist to the UK’s automated financial mill.

This simple, but absurd, rule now underpins most Western economies. Algorithms are written based on it, and employees and customers must comply, as otherwise companies’ remote investors make less money. Everyone loses except the shareholders, and if the system fails we bail out the shareholders so we can lose all over again. We know this to be true.

The conclusion is obvious: automation favours the algorithm writer, because it’s based on rules that create favourable outcomes for them.

But back to my unfortunate friend. According to the credit reference agencies (which are universally seen as holding accurate, benchmark data), and according to every bank in the UK (all of which are now machine-based compliance systems), my friend was not only an undesirable customer who was incapable of managing his own affairs (no credit or store cards, you see…), but also now an absconding debtor who had defrauded the taxpayer.

To an intelligent human being, he was none of these things; quite the reverse. But he was to a machine. As a result, every bank in the UK refused to open an account for the new company he’d set up. He wasn’t asking to borrow money, just to have a ‘vanilla’ account so that his clients could pay him.

Let’s restate the case: an honest man with no criminal record, a very good income, prospects, clients, zero debt, £45k in the bank, and property, was refused a simple bank account with no credit facilities, thanks to an algorithm. Such a man stood less chance of opening a bank account than a hardened criminal re-entering society.

And the more he was refused their services, the more he was logged on databases as having been refused their services: a vicious, downward spiral of bad data feeding into more bad data, while denying human beings any opportunity to intervene.

A nightmare of cyclical compliance.
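
Sketched in code – again, an illustration of mine, not any bank’s real system – the spiral looks like this:

```python
# Illustrative sketch of the feedback loop: each refusal is logged, and the
# log itself becomes the reason for the next refusal.
def apply_for_account(credit_file):
    refused = credit_file["flagged_as_debtor"] or credit_file["refusals"] > 0
    if refused:
        credit_file["refusals"] += 1  # the refusal becomes fresh 'evidence'
    return not refused

credit_file = {"flagged_as_debtor": True, "refusals": 0}
for attempt in range(3):
    print(f"Attempt {attempt + 1}: opened?", apply_for_account(credit_file))
# Every attempt adds another black mark, so the answer can never change.
```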

Of course (you say), he could simply have contacted the credit reference agencies and paid them to correct his records. He did, but it’s not that simple, because the onus was on him to prove the system wrong. His problems persisted for months and they linger to this day, five years later. Why? Because errors spread like a virus in a networked system.

Healing ‘patient zero’ in a case of bad data and inept algorithms doesn’t reverse an epidemic, because with networked, digital systems you have to cure an infinitely recurring number of patient zeroes. More, you could argue that any credit reference system that charges human beings money to correct its mistakes has zero incentive to be accurate.

Our credit reference and ratings system is a deeply flawed, self-serving monster, one born of a truly insane idea: the more in debt you are, the better you are at managing your finances. That idea is crippling Western society – as the 2008-09 crash proved. 

One of the disappeared

In the end, my friend was forced to close his company, which could have been an asset to the community. Having no bank account meant that he couldn’t trade; it was as simple as that. Winding up the company cost him money and lost him income, and he was forced to make other decisions that lost him more. Today, he’s nearly penniless, and wholly reliant on the debit card that goes with his 20-year-old personal bank account.

Five years later, he’s still regularly denied credit and can’t even be approved for a store card – and, of course, each refusal is added to his record. Recently, his bank told him that they’re shutting down the type of account he uses, which will force him to apply for a new one – a process mandated by another algorithm. What then?

At the time his troubles started, one of the high street bank managers he spoke to had the good grace to explain the problem, even as my friend was showing him proof after proof of his then-excellent financial status. The manager said:

“If it was up to me, sir, I’d say yes to opening the new account. But the computer won’t let me.” As clear a statement of human irrelevance to the Banking sector as you could wish to hear.

Like every employee in every bank in the Western world today, that manager’s ability to act outside of the system’s machine-based rules was no better than that of a robot. Granted, the manager had emotions, empathy, intelligence, and all the other traits that make us human, but he was no more capable than a robot of acting independently – even if reason and common sense told him to.

He had to comply.

The snooper’s nightmare

Now picture a world in which this type of machine-based, or machine-like, decision-making scales up to become the dominant force in an increasingly data- and compliance-based society.

It’s not a hypothetical scenario. A few years ago, another friend of mine – middle-aged, middle-class, white, and smartly dressed – took a photograph of a female friend at the coffee stand on Brighton railway station, because she was smiling and looked beautiful. But he happened to do this on the day that the then Prime Minister was in another part of town, speaking at a conference.

Because taking a photograph in a public space was seen as a security threat by a targets-driven police officer, who had to provide evidence to his boss of enforcing directives, my friend was detained and questioned under the Prevention of Terrorism Act and his name placed on a database of terror suspects.

Incredible as it may seem, one photograph and a cup of coffee was all it took for an innocent man to be targeted as a potential enemy of society. For the next two years, he claims, he was periodically followed, and one night he found three people hiding outside his flat in the dark, attempting to record his conversations with a radio microphone. I know the latter story to be true.

But let’s fast-forward to the present and consider the UK government’s data-retention rules (recently found to contravene European law) and the ‘Snooper’s Charter’ (aka the revised Investigatory Powers Bill). Initiatives like these are the next logical steps for any government that places inordinate faith in context-free data’s ability to reveal simple truths about complex human beings – even if Whitehall’s track record of creating large IT systems to process citizens’ data is shocking. Its record of securing that data is equally poor.

We live in a dangerous world; no one denies it. But these data-retention and snooping plans mandate the creation of yet more algorithms and machine-based systems to root out supposed bad behaviour, because it’s obvious that human beings won’t be able to monitor all of our communications 24 hours a day.

And that raises the questions: who will write these algorithms, and why? Based on what rules, what beliefs, and what pre-supposed evidence of wrongdoing? And what key words might they search for in our once-private communications?

For example, could any such system tell whether a device is being used, at any one time, by an adult or by a minor? No. And as anyone under 18 has a right to privacy under the UN Convention on the Rights of the Child – the most widely ratified human rights treaty in history – the UK government would have no choice but to create a national security exemption that allows it to intercept children’s communications.

The underlying point is that mass surveillance can only be a blunt instrument that ignores context and intent. In July 2015, whistleblower and fugitive Edward Snowden said that DNS queries should be encrypted as well as content, so that encryption, rather than surveillance, becomes the norm. “People are being killed based on metadata,” he said.

I don’t doubt it. Do you?

So what of the programme’s stated aims? Among other things, it’s claimed that by monitoring everyone’s communications the Snooper’s Charter will protect children from abusers and pornographers: an aim that everyone supports.

Except it won’t; it will criminalise young people. Why? Because the vast majority of explicit images of minors are made and disseminated by minors. Teenagers regularly send explicit images of themselves to each other (so-called ‘sexting’). If they’re under 18, that makes them criminals – even if they’re over 16 and in consenting relationships.

The figures are alarming. Several reports claim that 25 per cent or more of all teenagers ‘sext’ (almost certainly an underestimate), with 80 per cent of them being under 18.

So let’s do the maths. There are five million teenagers in the UK, according to government statistics. If up to 25 per cent of them are ‘sexting’, as surveys say, that’s 1.25 million young people. If 80 per cent of those involved are under 18, then one million children are breaking the law and are at risk of prosecution.
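
Restated as a simple calculation, using the figures above:

```python
teenagers_in_uk = 5_000_000   # government estimate cited above
sexting_rate = 0.25           # reported minimum share who 'sext'
under_18_share = 0.80         # reported share of those who are minors

at_risk = teenagers_in_uk * sexting_rate * under_18_share
print(f"{at_risk:,.0f} young people at risk of prosecution")  # 1,000,000
```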

Scare-mongering? No. Rising numbers of young people in legal, consenting relationships are already on the offenders register for sending pictures of themselves to their consenting partners. In America, a teenager was recently found to be both the perpetrator and the victim of this crime, for having a picture of himself on his own mobile.

In time, therefore, it’s conceivable that more children will be regarded as criminals than the predatory abusers the system is designed to catch.

In recent years, child protection workers have stopped using the term ‘digital footprint’ to describe the traces we leave behind online, and have begun using ‘digital tattoo’ instead: something permanent, impossible to erase.

Charities, care workers, teachers, and security professionals in the field of child internet safety all tend to agree on one thing: young adults should have the right to permanently erase any data that was created about them when they were under 18, to prevent any youthful mistakes damaging their lives as prospective employers scrape their social media profiles.

Sir Tim Berners-Lee has proposed a better alternative: preventing employers from taking such data into account. But the Snooper’s Charter makes all of this irrelevant.

Now, let’s suppose that such a machine-based surveillance system is also designed to look for evidence of extremist religious views, for example, via trigger words or phrases. Could such an automated, crude (and itself ideology-based) process distinguish between a terrorist using a keyword and, say, a historian, a student, a journalist, an economist, a researcher, an international affairs worker, a theologian, a priest, or simply someone who’s reading a book?

The experience of my unfortunate friend with the banking problems, above, suggests that the answer to that question is clear: No.
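
To see why, consider a deliberately crude sketch of the kind of context-free keyword matching such a system implies – the trigger list below is invented purely for illustration:

```python
# A crude, hypothetical keyword filter; the trigger list is invented for
# illustration. Context, profession, and intent are invisible to it.
TRIGGER_WORDS = {"caliphate", "insurgency", "martyrdom"}

def flag_message(text):
    return any(word in text.lower() for word in TRIGGER_WORDS)

# A historian's book note and a genuine threat look identical to the filter.
print(flag_message("Reviewing a new history of the 19th-century caliphate"))  # True
print(flag_message("Planning the insurgency for tomorrow at dawn"))           # True
```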

Not convinced? Then consider this: all the databases and computer systems that govern banks, the public sector, utilities, and countless other industries have been built and tested over decades. They interface with each other and with the credit reference agencies – those supposed beacons of truth – and yet even today are unable to distinguish between an innocent man and an absconding debtor. 

After all, in my friend’s case, the combined findings of their supposed intelligence, and of the billions of dollars of IT investment behind them, amounted to “has the same [very common] name” and “has moved this year”. That was all it took to destroy his financial reputation – and perhaps the lives of others who share the same name.

So, all together now: let’s join the primary school kids in singing ‘The Algorithm Song’ mentioned earlier in this report. Because as we face the rise of machine-based decision-making in the institutions that govern our lives, we are all helpless infants.

Conclusions

  • Automation is not intrinsically bad: it makes many industries and processes more efficient and cost-effective. However, too little thought is being given to the thinking behind the automation – to the strategies, and to the algorithms that encode them. In many cases, human beings are the least important element in some organisations’ plans.
  • The politician Michael Howard couldn’t have been more wrong when he said, years ago, that “the innocent have nothing to fear”. The innocent have plenty to fear – from machine-based decision-making, at least. Especially when it’s placed into the hands of ideologues, bureaucrats, and offshore investors.
  • While a Digital Bill of Rights might protect people should they wish to opt out of digital systems, a Citizens’ Right of Appeal against automated decisions based on faulty data is needed for those who have no choice but to opt in. Such a national right of appeal should override the interests of any organisation that denies them basic services. Human society must step in and help.
  • More, organisations should stop using technology to turn their employees into robots by preventing them from using their own intuition, judgement, intelligence, empathy, and skill. They’re human beings, serving other human beings, not robots serving robots.
  • And finally, it stands to reason that the Snooper’s Charter and related proposals in the UK will be used to deny many innocent people a place in society, based purely on mistakenly applied, context-free machine logic. As a society that used to value decency, fairness, and tolerance, we must oppose it.

*: An irony of the robotics age is that, while human beings in customer-facing roles are increasingly being forced to read aloud from pre-written scripts, robots in similar roles are being programmed to have natural language conversations, as this story explains.

• A different version of this article was originally published on diginomica.com as three linked reports, ‘We are the robots’, ‘When algorithms attack’, and ‘The snooper’s nightmare’.


© Chris Middleton 2015, 2016