Political Algorithms

Who guards the guards in an automated, robotic world?


Mad robots – or just a lack of human common sense?

• See Robotics Expert for more info, and for related articles by Chris.
• See Recent Articles for more of Chris’ journalism.

The March 2016 storm over Microsoft’s AI Twitter ‘chatbot’, Tay, was fascinating to behold. The errant package of code learned extreme racism, homophobia, and drug culture from internet trolls and was hastily taken offline.

As one commentator put it, the bot went from saying “humans are super cool” to extolling Nazi values in less than 24 hours – a useful analogue of extremism’s links with ignorance and peer pressure in our meme-propelled culture.

But were the trolls solely to blame? As journalist Paul Mason noted in his Guardian blog, Tay was essentially feeding off the deep undercurrents of prejudice and hate speech that lurk near the surface of many a social platform.

Or at least they do in the West. Tay’s Chinese counterpart, Microsoft’s XiaoIce, has not faced the same types of problem and has been liaising safely with millions of people online. This suggests a troubling possibility: that AI/machine learning and freedom of speech may be mutually exclusive concepts, unless controls are added to help robots filter out human beings’ basest instincts.

The point is that robots can only understand – or deal with – human intentions if coders understand them, or can anticipate potential problems. In this regard, Microsoft’s Tay should serve as a warning to us all. A naïve robot released by naïve programmers could be a dangerous machine.
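To make that risk concrete, here is a minimal, hypothetical sketch in Python of a ‘learn-by-imitation’ chatbot with no moderation step between input and training data. It is not Microsoft’s actual Tay code, and the class and phrases are invented for illustration; it simply shows why a bot that treats every message as acceptable training data will end up reflecting its loudest users.

```python
# Hypothetical sketch only: not Microsoft's Tay, just an illustration of a
# 'learn-by-imitation' bot that applies no filter to what it is taught.

import random

class NaiveChatbot:
    def __init__(self):
        # Friendly seed data, echoing the "humans are super cool" starting point.
        self.learned_phrases = ["humans are super cool"]

    def learn(self, user_message: str) -> None:
        # No moderation, no review: every input becomes part of the bot's 'personality'.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # The bot can only say what it has been taught.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
for message in ["<troll message 1>", "<troll message 2>", "<troll message 3>"]:
    bot.learn(message)  # a coordinated flood of abuse quickly outweighs the seed data
print(bot.reply())      # within hours, the output mirrors the worst of its inputs
```

The missing ingredient is not cleverer AI but human foresight: someone has to decide, in advance, what the bot should refuse to learn.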

Unless robots are programmed to obey Isaac Asimov-style universal laws, they can only learn, or be programmed with, behaviour from flawed human beings within whatever legal frameworks, political beliefs, or cultural norms exist locally.

The political algorithm

What if Trump’s beliefs were placed in a robot?

This gives us an aphorism for the robot age: All algorithms are political.

Algorithms always have political, social, and ethical dimensions; they reflect the values and beliefs of the societies or organisations in which they are written – not to mention the interests of shareholders and whatever outcomes may benefit them the most.

Automation favours the algorithm writer. For example, in Japan’s Nagasaki prefecture, the Henn-na hotel is staffed and run entirely by robots – not as a gimmick, but because robots are cheaper to employ than people. [For more on this, read this separate report.]

A recent survey by consultancy Avanade of 500 C-level executives worldwide found that 77 per cent believe they have not given much thought to the ethical considerations created by smart technologies, robotics, and automation, suggesting that ‘automate first, ask questions later’ is the dominant mindset in the quest to cut costs and/or drive up profits.

Is Google racist?

The risks and socio-political dimensions of even the simplest, most widely used computer algorithm are revealed by the following example:

Google: a simple image search inadvertently uncovered networked racism.

Tech journalist Derek du Preez recently shared with Facebook friends how he had used Google.co.uk and .com to search for stock images of teenagers to accompany a news story. Keen to reflect diversity, he was shocked to find that searching for “black teenagers” produced a disproportionately high number of pictures of criminals, police suspects, and crime victims, while searching for “white teenagers” produced photo-library shots of smiling youngsters.

In other words, a neutral web search for stock photos instantly uncovered a form of networked racial profiling that panders to deeply racist stereotypes in society at large. 

But is Google’s search algorithm inherently racist? That seems unlikely. Google’s code has probably uncovered something even more troubling: Western society’s collectively racist fear of black youth expressed as a form of confirmation bias, thanks to negative media coverage. (This is why positive stories are important in a networked culture.)

Put simply, the media disproportionately writes about young black males committing crime, and so that cloud of emphasis – prejudice – attaches itself to even the simplest web search. 

Racism is just a click away, even if you don’t search for it, and that creates a feedback loop into society, reinforcing prejudice by providing the apparent evidence to support confirmation bias. That’s no different to Microsoft’s Tay spouting hate speech; it’s just less overt and far more dangerous.
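Here is a hypothetical sketch of that feedback loop, assuming a toy ranker that boosts images purely on past clicks. It is not Google’s algorithm, and the image names and numbers are invented; it only shows how a seemingly neutral ‘most clicked first’ rule can harden whatever bias already exists in the underlying data.

```python
# Hypothetical sketch, not Google's algorithm: a click-driven feedback loop in which
# an apparently neutral ranking rule amplifies a pre-existing skew in the data.

from collections import Counter

# Invented starting point: biased media coverage has already skewed the image pool.
image_clicks = Counter({
    "smiling_stock_photo": 40,
    "crime_report_photo": 60,  # over-represented before the ranker even runs
})

def rank(clicks: Counter) -> list:
    # Ranking purely by past clicks inherits, and then amplifies, the initial skew.
    return [image for image, _ in clicks.most_common()]

for _ in range(100):
    top_result = rank(image_clicks)[0]
    image_clicks[top_result] += 1  # users tend to click whatever is shown first

print(rank(image_clicks))  # the original prejudice now looks like settled 'relevance'
```

Nothing in the ranking rule mentions race; the prejudice arrives with the data and is then laundered into an apparently objective result.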

A key challenge in robotics, therefore, is that universal human values are a long way off, given that few societies genuinely share the same beliefs, laws, and equality standards. For example: only two nations have 50 per cent or more representation of women in parliament – and neither is in the West; homosexuality is illegal in many places; racial and religious prejudice are rife and growing; several nations retain the death penalty; US citizens have a constitutional right to bear arms; Saudi Arabia defines atheists as terrorists and may soon begin executing gay men, and so on.

Which values speak for all humanity, or even for the majority?

Tay has demonstrated that machines can learn hatred by observing and modelling human society. So, in the future we must consider the possibility that some robots may be programmed within institutionally prejudiced organisations, or in countries that have poor human rights records. 

The law-enforcement robot

The risk of automating prejudice, groundless beliefs, and/or cultural misapprehensions is very real in these paranoid and polarised times. [See the attached infographic on the many other factors that can influence bad decisions.]

A superb infographic on the many forms of poor decision-making, from Business Insider.

In the US, young black men are nine times more likely than other Americans to be killed by police, despite making up less than two per cent of the population and being half as likely to be armed as white Americans. Almost 50 per cent of all the US citizens shot dead by police in 2015 came from ethnic minorities. (Figures from The Guardian.)

Also in the US, New York City recently ploughed over $400 million into combating marijuana use among black and Hispanic citizens, despite the fact that it is a far bigger problem among white New Yorkers, who were not targeted at all.

Now factor automation and autonomous robots into law enforcement and you can begin to see what kinds of problems might emerge.

What if a country suppresses the role of women in society, or identifies black or gay men as criminals? What if those beliefs are then automated and placed in a robot? Or what if that robot is programmed to identify a person’s ethnicity or religious beliefs, because that’s what society demands?

The world of ‘Robocop’ is not so far away…

K5 on patrol in California.
Dubai police robot.

The United Arab Emirates has one of the most advanced police forces in the world, and it is investing heavily in robots and smart-city capabilities, along with technologies such as Google Glass. At present, its robots are being used mainly in public liaison roles, but ‘full AI’ robots may be on the streets of Dubai within five years, and occupying law-enforcement positions within ten.

Silicon Valley has its robocops too: sensor-packed Knightscope K5s, which resemble a cross between a Dalek and a kitchen appliance. These autonomous machines are already patrolling malls, offices, campuses and local neighbourhoods in the San Francisco Bay area, monitoring traffic and recording unusual behaviour.

What counts as ‘unusual’ in San Francisco is an interesting question; but have the programmers given that any thought? Either way, the local belief is that the K5s’ presence may cut crime by as much as 50 per cent.

In China, the similar-looking AnBot riot-control robot has been programmed to zap people with an electric current if its artificial intelligence determines that they constitute a threat – the first obvious example of a machine being programmed to harm human beings of its own volition.

In 2015 in the US, tasers were linked with 48 deaths (as a possible but not definitive cause), so the AnBot has the potential to be the first autonomous police robot* to kill a human being. Never has the political algorithm been such a terrifying prospect.

[*: In July 2016, police officers in Dallas used a non-autonomous robot to take the life of armed murder suspect Micah Johnson after the deaths of several police officers, during a protest against the rise of police shootings of black Americans in the US.]

The dark side of surveillance

The government thinks you may be a bad person.

Another aspect of law enforcement is growing in prominence in the UK, the US, and elsewhere: surveillance, as the UK’s revised Investigatory Powers Bill and the FBI’s tussles with Apple reveal.

Sooner or later, someone in a back room will be asked to write an algorithm to identify ‘bad behaviour’ from the mass of context-free data gathered by ISPs and comms suppliers.

At that point, the political algorithm will begin to have a massive impact on Western society, and the simple quest for knowledge about some subjects may attract suspicion (as may any citizen’s desire for privacy).
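What might such an algorithm look like? Here is a hypothetical sketch of the kind of crude ‘bad behaviour’ filter the article warns about. The watch list and threshold below are invented for illustration only; the point is that both are value judgements made by whoever happens to write the code.

```python
# Hypothetical sketch of a crude, context-free 'bad behaviour' classifier.
# The watch list and threshold are invented; each is a political choice in code form.

SUSPICIOUS_TOPICS = {"encryption", "protest", "chemistry"}  # someone chose these words
FLAG_THRESHOLD = 2                                          # and someone chose this number

def flag_citizen(browsing_history: list) -> bool:
    # Context-free matching: research, journalism and idle curiosity all look identical.
    hits = sum(1 for query in browsing_history if query in SUSPICIOUS_TOPICS)
    return hits >= FLAG_THRESHOLD

history = ["encryption", "gardening", "chemistry", "recipes"]
print(flag_citizen(history))  # True: a curious student becomes a 'suspect'
```

Change the word list or the threshold and different citizens become suspects; that is the political algorithm in its simplest possible form.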

Who writes those algorithms, and based on what values, what beliefs, what targets, what desired outcomes, and what concepts of ‘bad’ or ‘suspicious’ behaviour? These are questions to which few people have given serious consideration. We must address them urgently, or society may pay a terrible price in the decades to come.

Algorithms are deeply connected with human societies, frailties, and misconceptions in a way that few people – including coders – understand or take seriously enough. And it is into this context that machine learning, AI, and robots are emerging. The risks are all too clear, as the emergence of lethal police robots, and even Microsoft’s racist chatbot, demonstrate.

Q: Who could have predicted that an AI chatbot on Twitter might be hijacked by trolls? A: Anyone with an ounce of common sense. So why didn’t Microsoft see that, or anticipate such problems? That remains a mystery; but the company is certainly considering them now.

An earlier version of this article was first published on diginomica.

Also see:
Rise of the Robots (in-depth robotics report)
Digital Dystopia


© Chris Middleton 2016