Chris Middleton hears from the world’s top robotics experts
In this two-part, 5,000-word special report on the global rise of humanoid robots, AI, and automation, Chris Middleton hears from some of the world’s leading robotics experts. In Part 1, below, he explores the ethical implications of mass automation and AI, and the influx of humanoid machines into our societies. In Part 2, he looks at further applications, and at the position of the UK in this emerging economy.
Part 1: Half of all jobs will be taken by robots, claims scientist
If your job can be easily defined, then it can – and will – be automated. That was the stark message from the Japan-UK Robotics and AI seminar, which took place at the Japanese Embassy in London in February 2016. The event brought together experts to share knowledge and foster collaboration between the two countries.
So, if it is true that a job’s ‘definability’ is the same as its ability to be automated, then robotic software, hardware robots, and AI pose a much greater challenge to human society than most people realise. It won’t just be the robotic ATM, checkout, or ticket kiosk serving you, but also robot nurses, doctors, care assistants, lawyers, schoolteachers, and more: all roles that currently require human intuition, intelligence, and empathy.
That robots might take on simple rules-based, repetitive, or low-skilled roles is not a new idea, but in a world of escalating compliance, which of today’s jobs isn’t based on rules, targets, and spreadsheets – algorithms in all but name? And with autonomous cars, delivery trucks, and drones thrown into the mix, it’s clear that more and more tasks will cease to be carried out by human beings.
The origin of the word ‘robot’ is the Czech word robota, meaning ‘forced labour’ – brought into popular usage by Karel Čapek’s 1920 play ‘Rossum’s Universal Robots’. But while some people focus on the ethical aspects of robots resembling and replacing human beings – as the False Maria robot did in Fritz Lang’s 1927 film ‘Metropolis’ – few people consider the flip side of that process: human beings behaving more like machines in a world of repetitive, regulated processes. That, I would argue, is where some of the danger lies in an increasingly automated world.
The risk of social upheaval is one reason why robotics research is increasingly taking place in multi-disciplinary teams: not only of computer scientists and engineers, but also of psychologists, cultural theorists, ethics experts, and cognitive researchers. Robotics is no longer just about scaling a great technology Everest just because it’s there.
Living with robots
Dr Anders Sandberg is James Martin Research Fellow at Oxford University’s Future of Humanity Institute. He believes that in the future nearly half of all jobs (47 per cent) will be automated, with those that can be easily described being the easiest to hand over to machines.
If his prediction sounds far-fetched, then consider this: robotic production lines have been with us for decades, while software robots, touchscreen interfaces, kiosks, and more, have already replaced supermarket tills, ticket offices, reception desks, hotel check-ins, bank counters, and security gates. In some large retailers, people are employed to ensure that the self-service tills are working properly, effectively making human beings subservient to machines.
Pepper, the emotion-sensing humanoid made by French company Aldebaran Robotics (now 95 per cent owned by Japan’s SoftBank), is already employed in shops and cafés in several parts of the world, including in SoftBank stores and some Carrefour supermarkets. [Here’s a 2015 interview with an English-speaking version of the robot.]
Several international banks plan mass software automation – HSBC and Bank of America’s investment arm Merrill Edge, for example. RBS is replacing hundreds of face-to-face customer service staff with automated services, which it says frees up its remaining human agents to be expert advisors. But there’s a caveat: those advisors will only be available to its wealthiest clients, making face-to-face human service into a luxury item for those that can afford it.
Meanwhile, Bloomberg and Associated Press are pursuing automation in the production of journalism and other editorial products and services: automated news, generated by robots and based on press releases and trending topics. In this way, journalism may become indistinguishable from marketing collateral.
Welcome to a world of machine-generated, machine-readable code, published and read by other machines. In both cases, the replacement of human beings is seen as a competitive differentiator in a globalised, cutthroat market.
But anyone seeking legal advice about their employment prospects should be equally wary: the world’s first AI lawyer is already winning business. Elsewhere, Google’s AI has started writing poetry, but at least it’s generating accidental page views for the late human poet, Ai Ogawa.
The moral robot
Mass unemployment and social divisions aside, Oxford University’s Sandberg acknowledged that there are other ethical problems with the idea of robots taking jobs from humans, not to mention technical obstacles. Among these is the challenge of establishing human-machine empathy in a world in which “the network doesn’t care”, he said, and in which we can barely explain shared human values to each other, let alone to machines.
Sandberg said: “Robots have to navigate a human-shaped world and understand human intentions. It’s easier to make an efficient robot than a [morally] good robot. A law-abiding robot is not the same as an ethical robot. Intelligence and values are not the same thing.”
His point was underlined in March 2016 by the storm over Microsoft’s chatbot, Tay, which learned racism, homophobia, and drug culture from Twitter trolls and was hastily taken offline. As one commentator put it, the bot went from saying “humans are super cool” to extolling Nazi values in less than 24 hours – a useful analogue of extremism’s links with ignorance and peer pressure in our meme-propelled culture.
But were the trolls really to blame? As journalist Paul Mason noted in his Guardian blog, Tay was feeding off the deep undercurrents of prejudice and hate speech that lurk near the surface of many social platforms. Or at least they do in the West. Tay’s Chinese AI counterpart, Microsoft’s XiaoIce, has not faced the same problems and has been liaising safely with millions of people online. This suggests a troubling possibility: that AI/machine learning and freedom of speech may be mutually exclusive concepts.
The politics of algorithms
In Japan’s Nagasaki prefecture, the Henn-na hotel is staffed and run entirely by robots – not as a gimmick, but because robots are cheaper to employ than people. This gives us a useful aphorism: algorithms favour the algorithm writer, which in this case is the hotel’s shareholders.
Not all shareholders have local taxpayers’ interests at heart: a recent survey by consultancy Avanade of 500 C-level executives worldwide found that 77 per cent had not given much thought to the ethical considerations created by smart technologies, robotics, and automation – suggesting that ‘automate first, ask questions later’ is the dominant mindset in the quest to cut costs and drive up profits.
Tay demonstrated that machines learn by observing and modelling human society. Unless robots are programmed to obey Isaac Asimov-style universal laws, they can only learn, or be programmed with, behaviour from flawed human beings within whatever legal frameworks, political beliefs, or cultural norms exist locally. This gives us another useful aphorism: all algorithms are political. They reflect the values and beliefs of the societies or organisations in which they are written.
In the future, this nexus of issues may be of particular relevance in law enforcement; the world of ‘RoboCop’ is not so far away.
The United Arab Emirates has one of the most advanced police forces in the world, and it is investing heavily in robots and smart-city capabilities, along with technologies such as Google Glass. At present, its robots are being used in public liaison roles, but more advanced machines may be on the streets of Dubai within five years, and occupying law-enforcement positions within ten.
Silicon Valley has its robocops too: sensor-packed Knightscope K5s, which resemble a cross between a Dalek and a kitchen appliance. These autonomous machines are already patrolling malls, offices, campuses, and local neighbourhoods in the San Francisco Bay Area, monitoring traffic and recording unusual behaviour. Their maker claims that they may cut crime by as much as 50 per cent.
In China, the similar-looking AnBot riot-control machine has been programmed to zap people with an electric current if its onboard AI determines that they constitute a threat – the first obvious example of a machine being programmed to harm human beings of its own volition. In 2015 in the US, tasers were linked with 48 deaths, so the AnBot has the potential to become the first police robot to kill a human being.
And, of course, robots have military applications, too. The US military’s Loyal Wingman programme is exploring the potential of converting an F-16 warplane into a semi-autonomous, unmanned fighter: another robot that may decide to take human lives.
Former US Deputy Secretary of Defence, Robert Work, said of the programme: “We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds, we will have to make decisions on how we best can compete.” In other words, someone else may develop lethal robots, so we must too: the power to kill is a new competitive differentiator.
The caring robot
But not all robots will be taking the law into their own hands. One of the main robotics applications in the near future will be in care roles for an ageing and socially isolated population.
Kerstin Dautenhahn is Professor of Artificial Intelligence at the University of Hertfordshire. She is exploring how humanoid robots can integrate with human beings “in a socially acceptable manner” as care assistants and companions. The University has purchased a “typically British house” and filled it with robots to explore the ways in which vulnerable people can be made to feel at ease when living among autonomous machines.
Dautenhahn’s main research is in home-assistance robots for elderly residents in smart homes – a scenario being separately developed by Aldebaran for its Romeo robot. Dautenhahn is also exploring the use of therapeutic teaching machines for children with autism, via the University’s KASPAR robot. Similar research programmes have already demonstrated that children on the autism spectrum respond remarkably well to humanoid robots.
In this sense, any lingering fears we might have of blank, emotionless machines are misplaced: those blank, uncaring robots are actively teaching autistic children how to understand, respond to, and express human emotions.
At the same time, cognitive research is helping other robots to learn good behaviour from their daily interactions with the rest of us.
But are robots being programmed to be more like humans, or are we being programmed to feel more protective of machines? Witness the spread of humanoids like NAO, Pepper, Robi, and Honda’s ASIMO: all designed to be non-threatening, smaller than an adult, and to have childlike behaviours and voices: clever design decisions to put human beings at ease.
One man knows more about creating humanoid machines than most.
Telepresence robots are an established tech hotspot, allowing a human being in London, for example, to work remotely in San Francisco. But Japan’s Prof. Hiroshi Ishiguro has taken telepresence to the next level. He has become a global robotics industry icon by creating a realistic android of himself, which he sends to conferences to give presentations on his behalf.
He’s also the man behind ‘Erica’, which he calls “the most beautiful and intelligent android in the world”. However, Ishiguro revealed that the female-featured machine, which can hold natural conversations with humans, incorporating body language and non-verbal cues, is essentially “a fake”: behind ‘her’ are ten separate computers.
So why is Ishiguro focusing on the ‘uncanny valley’ of machines that look like humans? The point is the human interface, he explained; to find out how humans respond to a machine that looks exactly like them. “The ideal interface for a human being is another human being,” said Ishiguro. “The android is the fundamental testbed for understanding humanoid (sic) behaviour.”
He explained that his immediate plan is to use cognitive research to develop humanlike robots that have real intentions and desires, and to “archive a human” in android form.
However, Ishiguro’s long-term aim is not to create artificial people, but to compile the findings of his current research into a more obviously machine-like form – like the cute humanoids designed by his counterpart Prof. Tomotaka Takahashi, founder and CEO of the Robo Garage at Kyoto University. Ultimately, Ishiguro explained, his work is aimed at forging what he called a “robot society”: one in which human beings are supported by humanoid robots and androids.
If that sounds sinister, then consider this. First, human beings are already acting more and more like machines out of personal choice, interfacing with each other via social platforms, text, and chat on their smartphones. And second, we don’t regard people who rely on technology to move about, replace missing limbs, or communicate, as being any less than 100 per cent human.
But how far away is the intelligent, autonomous machine that we know from a century of science fiction tales?
Take me to your Iida
Dr. Fumiya Iida of the Bio-Inspired Robotics Lab at Cambridge University’s Department of Engineering believes that robotics is moving towards a world of “embodied intelligence” as AI and robotics come together over the next 20 years. Robots will not only learn to be more sensitive and responsive to human needs, he said, but also more creative and autonomous.
One Cambridge experiment has seen a robotic arm design and build 500 robots in a week out of smart components, with each one being slightly different to the last: a form of autonomous prototyping that could be left running 24×7.
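The Cambridge experiment follows a design-build-evaluate pattern: each new robot is a slight variation on its predecessor, and the better performers steer the next design. A minimal sketch of that kind of autonomous prototyping loop, with an invented stand-in for the physical build-and-test step (the function names and fitness measure are illustrative, not from the research):

```python
import random

def build_and_score(design):
    """Stand-in for physically assembling a robot variant and
    measuring its performance; here we simply score how close a
    single design parameter is to a notional optimum of 0.7."""
    return -abs(design - 0.7)

def prototyping_loop(generations=500, mutation=0.05):
    # Start from an arbitrary initial design.
    best_design = 0.5
    best_score = build_and_score(best_design)
    for _ in range(generations):
        # Each new robot is slightly different from the last.
        candidate = best_design + random.uniform(-mutation, mutation)
        score = build_and_score(candidate)
        if score > best_score:  # keep only the improvements
            best_design, best_score = candidate, score
    return best_design

print(round(prototyping_loop(), 2))
```

Left running unattended, a loop like this gradually homes in on better designs without any human in the cycle – the software analogue of the arm building 500 slightly different robots in a week.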
Dr. Komei Sugiura, Senior Researcher at Japan’s National Institute of Information and Communications Technology (NICT), explained how robotic capabilities are already available as on-demand cloud services, via multilingual speech-recognition and synthesis engines – such as Rospeex, which is designed to facilitate human-robot dialogue. IBM’s Watson supercomputer is also available in the cloud to business customers.
But true language learning – as opposed to repeating pre-programmed phrases – remains a challenge, explained Tadahiro Taniguchi, Associate Professor at the College of Information Science and Engineering, at Japan’s Ritsumeikan University.
Taniguchi is exploring how robots can learn languages with no foreknowledge of them, relate spoken words to objects and events, and understand the relationships between written words, phonemes, and sounds. Those days are not far off: “Unsupervised machine learning is gradually becoming a solvable problem,” he said.
Meanwhile, Prof. Sethu Vijayakumar, Director of the Centre for Robotics at Edinburgh University, stressed that his research is moving away from teleoperation and towards autonomy: robots that interact with human beings based on their own acquired knowledge. But the balance of power should always remain with humans, he said: “We should be looking at collaboration that includes different levels of autonomy at different moments, to reduce human workloads while leaving the human in control.”
Is society ready?
Arthur C Clarke is often credited with observing that we tend to overestimate a technology’s impact in the short term, but underestimate it in the long term. However, it’s clear that the widespread uptake of robotics is approaching far more quickly than most people realise, even as software automation spreads through every walk of life.
We already talk to our machines (Siri on the iPhone), and our machines talk to us (the sat navs in our cars), so in that limited sense we’re prepared for the rise of the robots.
Yet while individual humanoids, such as Pepper (production runs of which have sold out in Japan) and Honda’s brand ambassador ASIMO, might be impressive feats of engineering and design, most researchers accept that we’ve yet to reach the tipping point in humanoid robotics at which a person believes they are conversing with a truly intelligent, autonomous machine.
But perhaps we have. Aldebaran/SoftBank Robotics recently linked a NAO humanoid with IBM’s Watson supercomputer to create a robot with the voice, personality, and interface of a NAO, but the brain of one of the world’s most advanced computers. This video reveals the impressive results. The facility will be made available to business customers of the Pepper robots.
In 2014, a chatbot was widely reported to have passed the Turing Test, convincing a third of its judges that it was a teenage boy during text-based conversations – though many researchers dispute the claim. Either way, the emergence of intelligent humanoid robots will be as much about coordination as innovation, bringing together software, cloud connectivity, AI, embodied intelligence, engineering, machine learning, and cognitive research.
It’s also about rising network speeds: Ishiguro maintains that ‘the cloud’ is currently too slow for robots’ intelligence to be cloud based when engaged in natural language conversations – although Aldebaran’s collaboration with IBM demonstrates that the time-lag can be marginal.
All of which puts Boston Dynamics – maker of the Atlas and BigDog robots – centre stage. In June 2017, its parent, the Alphabet research company X, sold it to SoftBank, the Japanese owner of the Aldebaran brand.
Until this news broke, we had been looking at a future in which Alphabet’s robots might one day recognise every face and every street corner on the planet, know what we do, what we say, where we live, and who our friends are – thanks to Google’s technology. But the company backed away from that vision, along with the cost implications of spending 30 per cent of its resources on “things that take ten years”, in the words of its own chief executive. Perhaps we should be grateful for the ‘presentism’ that sometimes afflicts the tech industry.
The First World Data War
But the technologies still exist and the human context behind these advancements is both fascinating and alarming. In many countries, human rights are being rolled back in favour of corporate rights and governance, while wealth is pouring offshore into tax havens and hedge funds – power that is unaccountable, in every sense of the word.
A conspiracy theorist might conclude that we’re experiencing a corporate coup d’état, as a handful of private enterprises and mega-wealthy individuals become more powerful than governments, stripping away countries’ rights to limit corporate activities. Meanwhile, Paul Mason (again) believes we are heading towards a post-capitalist society, spurred by the rise of new technologies, open source development, and voluntary service exchange.
At the same time, we’ve each been complicit in a vast data-gathering exercise about everyone we know, thanks to social media, image tagging, and our relentless quest to record every aspect of our lives.
The stage is set, perhaps, for a global clash of opposing views about what our future should look like, spurred on by the rise of a networked culture. Call it the First World Data War. It is into this world that self-aware robots are slowly emerging. This isn’t an alarmist statement: it’s a simple statement of fact, as the Japan-UK seminar amply demonstrated.
Human and robot rights
So it’s inevitable that a new context for human rights will emerge alongside the robots. Serious questions are already being asked by robotics experts: Can a robot harm a person? What powers might a law-enforcement robot have over a human being? And who is responsible if a robot injures a person by accident?
But where does the definition of ‘robot’ begin and end? Should incoming laws also apply to software applications? And if not, why not? After all, a humanoid robot is little more than software with limbs, sensors, and cameras.
Yet how society reacts to the emergence into science fact of a new generation of intelligent, autonomous machines may come down to something less predictable than mere technology advancement: how well the technologists really understand human nature, human rights, and human tolerance in their quest to push back the boundaries of technical achievement.
It’s also conceivable that, in response, human society will revert to being more analogue, and leave the digital realm to the machines. But as Oxford University’s Sandberg acknowledged: “Experts in AI are very bad at predicting the future. They generate lots of data, but are very bad at making predictions. So we should expect surprises and incorporate those into our thinking about society. Very advanced systems may behave in unpredictable ways.”
Never forget: human beings and human societies are advanced systems too. Don’t write us off too quickly.
Part 2: The Great British Robot
In my first report, above, inspired by the Japan-UK Robotics and AI Seminar – which took place on 18 February 2016 – I explored some fast-emerging futures for robotics worldwide, and the impact that these may have on society. Make no mistake, the robots are coming. But will any of them be British?
The first thing to say is that many of the world’s leading robotics experts are British, or work at universities in the UK. There are robotics labs at universities and colleges throughout the country, and talented innovators have created dozens of startups; the UK is far from a backwater in robotics terms.
So securing investment is clearly one part of the equation. But for this emerging economy to take root in the UK (as the British government hopes), then something else needs to happen: the government needs to build an atmosphere of technology trust, something it is very bad at doing.
For example, Whitehall’s surveillance programme has done little more than alienate the very providers on which the burden and cost of the policy are being dumped – not to mention alarm civil liberties campaigners and human rights groups. The long-term fudge that will arise from this may destabilise the UK’s digital economy and help break the internet apart into national fiefdoms.
But some real-world applications are no-brainers. For example, it’s a little-known fact that the UK will spend £2 billion every year for the next 100 years to clean up and process its nuclear waste – principally that caused by the Cold War arms race, rather than by nuclear power.
Currently, a man in a hazmat suit cuts up that radioactive material by hand with an angle grinder – a very British solution to a massive environmental problem. He then has to be washed down by a team of other hazmat-suited men. Much of that contaminated clothing then has to be trashed, meaning that every barrel of nuclear waste generates ten further barrels of radioactive material – including 10,000 pairs of gloves a day (or 365 million pairs of non-biodegradable, radioactive gloves over the next 100 years).
So, nuclear waste disposal robots present an opportunity to save one UK industry alone up to £200 billion over a century, while drastically cutting the amount of hazardous waste it generates and keeping vulnerable humans out of harm’s way. Japan faces similar problems, notably at the damaged Fukushima plant, which has so far generated 10.7 million one-ton bags of radioactive waste.
Talented robotics experts such as Dr Rustam Stolkin of the Robotics department at the University of Birmingham and Professor Robin Grimes at Imperial College London are working in this area, so you’d think that the UK government would be pouring money into robotics, if only to solve this problem. But is it?
Together, robotics and autonomous/automated systems (RAS) form one of the ‘eight great technologies’ identified by Whitehall to propel the UK towards future prosperity. The government is investing £200-300 million in the sector by 2020, explained Professor Grimes, who is Chief Scientific Adviser to the Foreign and Commonwealth Office and Professor of Materials Physics at Imperial College.
According to Grimes, the hope is that the market for automated systems alone will be worth £191 billion to the UK economy – making the government’s investment a mere sprat to catch a mackerel. Further opportunities abound in robotic software and systems to inspect and repair the UK’s ageing infrastructure, not to mention looking after an ageing and potentially lonely population.
Impressive? Not when compared with the hosts of February’s robotics seminar, Japan. The Japanese government says it is investing ¥26 trillion (£161 billion) in robotics by 2020, with the aim of creating a “super-smart society”, according to Embassy spokeswoman Kanae Kurata. That’s real ambition backed by hard funding.
We know that Japan is the epicentre of robotics innovation – certainly in humanoid robots, which have long held the status of pop culture icons in local society. So where are the UK’s strengths, other than in abysmal levels of under-investment? Grimes explained that the UK majors in areas such as computer simulation and has complementary skills in sensors, software, data handling, and what he called “flexible legal environments”. More R&D partnerships are needed between the UK and Japan, he said, pointing to the relationships that already exist between Japanese corporations and British universities, such as Hitachi’s partnership with Edinburgh, for example.
Fixing a hole
Dr Kedar Pandya is Head of Engineering at the Engineering and Physical Sciences Research Council (EPSRC), which makes him another of the UK’s key policymakers behind the national strategy for RAS. He said that the UK sees strong economic potential in wireless communications, robotic surgery, and assistive technology: all good things, supported by excellent domestic research.
Unmanned systems, manufacturing, and mobile autonomy – in the air, on land, underground, and at sea – are other key growth hotspots for the UK, he said. For example, Pandya explained that there is a long-term plan for robots to replace diggers in Leeds and turn it into the world’s first self-repairing city.
Leeds isn’t the only UK region to be interested in AI and robotics. Llewelyn Morgan, Services Manager for Localities, Policies, and Innovation at Oxfordshire County Council, explained how the authority is developing a transit strategy that embraces autonomous vehicles, making it the only council in the country (and possibly the world) to be doing so.
With Oxfordshire’s roads already at capacity, and with “funding on a downward scale, and challenges on an upwards scale” thanks to Whitehall budget cuts, Morgan explained how AI and “an open, collaborative approach” are providing insights, predictive analytics, and early warnings of travel problems, which can be cross-checked with live sensor data.
Morgan said that the combination of floating data, machine learning, and predictive analytics provides more accurate information than real-time sensor data – predictions that can be tested against live feeds. This is genuine innovation being carried out by talented people on micro-budgets, and the same technical principles could surely be applied in other scenarios, such as predicting extreme weather or the spread of viruses.
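The principle Morgan describes – testing model predictions against live sensor feeds and flagging the divergences – can be sketched in a few lines. This is purely illustrative: the road names, travel times, and tolerance threshold are invented, not Oxfordshire’s actual data or system.

```python
# Toy illustration: cross-check predicted travel times (from a
# historical model) against live sensor readings, and flag roads
# where reality diverges enough to warrant an early warning.

predicted_minutes = {"A34": 12.0, "A40": 18.0, "B4044": 9.0}  # model output
live_minutes      = {"A34": 13.1, "A40": 31.5, "B4044": 8.7}  # sensor feed

def early_warnings(predicted, live, tolerance=0.25):
    """Return roads whose live travel time exceeds the prediction
    by more than `tolerance` (25 per cent by default)."""
    return [road for road, p in predicted.items()
            if live.get(road, p) > p * (1 + tolerance)]

print(early_warnings(predicted_minutes, live_minutes))  # → ['A40']
```

The same compare-and-flag pattern would carry over to the other scenarios mentioned, such as extreme weather or epidemic spread, with different predictions and feeds.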
Calling Doctor Robot
Medical applications are another research hotspot for robotics. If invasive surgery can be performed with microscopic precision by a robot, using tools that are the width of a human hair, then society will increasingly be forced to define in what ways a human surgeon would perform the task better.
The same principle applies to robotic care assistants in smart homes, which will enable elderly, sick, or vulnerable people to stay in their own homes for longer, reminding them to take their medicines and keeping them company. See my separate report for more on this.
Prof. Guang-Zhong Yang, Director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College, explained how medical robots and ultra-precision medical procedures could “transform cancer surgery” and allow early intervention, using tools that are “like current surgical instruments, but smart”, in partnership with human surgeons.
And the applications don’t end there. It’s no giant leap of the imagination to suppose that, with other technologies – such as machine learning, big data analytics, and 3D printing – opening up new worlds of engineering precision and construction, other realms that have long defined human culture and endeavour, such as architecture, industrial design, and engineering, might become the preserve of robots too.
Underfunding: upsides and downsides
Whether such innovations might be good or bad for human society, creativity, social cohesion, rights, and employment, is an important question.
As I explained above, the question is not if automation and robotics will spread into those areas of human endeavour that can be easily defined, but when, and how. This presents challenges and opportunities to human society. The question is what jobs, what roles in society, might remain the sole preserve of human beings? See this separate report for more.
On the face of it, the obvious answer might seem to be ‘creative jobs’. But while the digital economy might be building more and more platforms for creative people – writers, musicians, designers, and so on – it is also making it much harder for them to earn a living, as their services are commoditised by the network effect.
Where the balance is struck, and whether intelligent machines will make more and more people redundant in the workplace, or do the opposite by enhancing their capabilities, may come down to whether our policymakers and strategists have even considered these futures – and, of course, whether human beings will tolerate it for long.
And it will also come down to our levels of real-world investment in what the government already accepts is a key technology set for the UK’s prosperity. So the fact that the UK’s investment levels are lamentable when set alongside those of our technology partner, Japan, is unfortunate for the technologists, but a comfort for the alarmists.
Edited, earlier versions of these two reports were first published by Diginomica.
© Chris Middleton 2016. All rights reserved.