Chris Middleton hears from the world’s top robotics experts.
IN-DEPTH RESEARCH AND ANALYSIS. In this two-part, 5,000-word special report on the global rise and risks of humanoid robots, AI, machine learning, and automation, Chris Middleton hears from some of the world’s leading robotics experts. In Part 1, he explores the ethical implications of mass automation and AI, and the influx of humanoid machines into our societies. In Part 2, below, he looks at further applications, and at the position of the UK in this emerging economy.
Part 1: Half of all jobs will be taken by robots, claim scientists
UPDATED MAY 2016. If your job can be easily defined, then it can – and will – be automated. That was the stark message from the Japan-UK Robotics and AI seminar, which took place at the Japanese Embassy in London in February 2016. The event brought together experts to share knowledge and foster collaboration between the two countries.
So, if it is true that a job’s ‘definability’ is the same as its ability to be automated, then robotic software, hardware robots, and AI pose a much greater challenge or threat to human society than most people realise. It won’t just be the robotic ATM, checkout, or ticket kiosk serving you, but also robot nurses, doctors, care assistants, lawyers, schoolteachers, and more: all roles that require human intuition, intelligence, and empathy.
That robots might take on simple rules-based, repetitive, or low-skilled roles is not a new idea, but in a world of escalating compliance, which of today’s jobs isn’t based on rules, targets, and spreadsheets – algorithms in all but name? And with autonomous cars, delivery trucks, and drones thrown into the mix, it’s clear that more and more tasks will cease to be carried out by human beings.
The origin of the word ‘robot’ is the Czech word robota, meaning ‘forced labour’ – brought into popular usage by Karel Čapek’s 1920 play ‘Rossum’s Universal Robots’. But while most people focus on the ethical aspects of robots resembling and replacing human beings – as the False Maria robot did in Fritz Lang’s 1927 film ‘Metropolis’ – few people consider the flip side of that process: human beings behaving more and more like robots in a world of repetitive, highly regulated processes. That, I would argue, is where the real danger lies in an increasingly automated world. [For more on this, see We are the Robots, on this website.]
The risk of social upheaval is one reason why robotics research is increasingly taking place in multi-disciplinary teams: not only of computer scientists and engineers, but also of psychologists, cultural theorists, ethics experts, and cognitive researchers. Robotics is no longer just about scaling a great technology Everest just because it’s there.
Living with robots
Dr Anders Sandberg is James Martin Research Fellow at Oxford University’s Future of Humanity Institute. He believes that in the future nearly half of all jobs (47 per cent) will be automated, with those that can be easily described being the easiest to hand over to machines.
If this sounds far-fetched, then consider this: robotic production lines have been with us for decades, while software robots, touchscreen interfaces, kiosks, and more, have already replaced supermarket tills, ticket offices, reception desks, hotel check-ins, bank counters, and security gates. In some large retailers, people are employed to ensure that the self-service tills are working properly, effectively making human beings subservient to machines.
Pepper, the emotion-sensing humanoid made by French company Aldebaran Robotics (now 95 per cent owned by Japan’s SoftBank), is already employed in shops and cafés in several parts of the world, including in SoftBank stores and some Carrefour supermarkets. [Here’s a 2015 interview with an English-speaking version of the robot.]
Several international banks plan mass software automation – at HSBC, and at Bank of America’s investment arm Merrill Edge, for example – essentially turning them into money-making machines. Organisations’ HR and F&A functions are among other areas being automated, because rule- and policy-based systems are already little more than algorithms.
The conclusion is inescapable: a robot doesn’t need to have a face or body to take your job.
RBS is replacing hundreds of face-to-face customer service staff with automated services, which it claims frees up its remaining human agents to be expert advisors. But there’s a caveat: those advisors will only be available to its wealthiest clients, making face-to-face human service into a luxury item for those who can afford it.
Meanwhile, Bloomberg and Associated Press are pursuing automation in the production of journalism and other editorial products and services: automated news, generated by robots. That material will be based on press releases and trending topics. In this way, journalism may become indistinguishable from marketing collateral.
Welcome to a world of machine-generated, machine-readable code, published and read by other machines. In both cases, the replacement of human beings is seen as a competitive differentiator in a globalised, cutthroat market.
But anyone seeking legal advice about their employment prospects should be equally wary: the world’s first AI lawyer is already winning business. Elsewhere, Google’s AI has started writing poetry – some of it rather good. But at least it’s generating page views for the late human poet, Ai Ogawa.
The moral robot
Mass unemployment and social divisions aside, Oxford University’s Sandberg acknowledged that there are other serious ethical problems with the idea of robots taking jobs from humans, not to mention technical obstacles. Among these is the challenge of establishing human-machine empathy in a world in which “the network doesn’t care” and we can barely explain shared human values to each other, let alone to machines.
Sandberg said (for brevity’s sake, I’ve condensed his comments): “Robots have to navigate a human-shaped world and understand human intentions. […] It’s easier to make an efficient robot than a [morally] good robot. […] A law-abiding robot is not the same as an ethical robot. […] Intelligence and values are not the same thing.”
His point was underlined in March 2016 by the storm over Microsoft’s AI Twitter chatbot, Tay, which learned racism, homophobia, and drug culture from internet trolls and was hastily taken offline.
As one commentator put it, the bot went from saying “humans are super cool” to extolling Nazi values in less than 24 hours – a useful analogue of extremism’s links with ignorance and peer pressure in our meme-propelled culture.
But were the trolls really to blame? As journalist Paul Mason noted in his Guardian blog, Tay was feeding off the deep undercurrents of prejudice and hate speech that lurk near the surface of many a social platform.
Or at least they do in the West. Tay’s Chinese AI counterpart, Microsoft’s XiaoIce, has not faced the same problems and has been liaising safely with millions of people online. This suggests a troubling possibility: that AI/machine learning and unfettered freedom of speech may be mutually exclusive concepts, unless controls are added to help robots filter out human beings’ basest instincts.
The politics of algorithms
Algorithms always have social and ethical dimensions. They reflect the values and beliefs of the societies or organisations in which they are written – not to mention the interests of corporate shareholders and whatever outcomes may benefit them the most.
In Japan’s Nagasaki prefecture, the Henn-na hotel is staffed and run entirely by robots – not as a gimmick, but because robots are cheaper to employ than people. [For more on this, read this separate report.]
And not all shareholders have local taxpayers’ interests at heart. A recent survey by consultancy Avanade of 500 C-level executives worldwide found that 77 per cent had given little thought to the ethical considerations created by smart technologies, robotics, and automation, suggesting that ‘automate first, ask questions later’ is the dominant mindset in any quest to cut costs and drive up profits.
Unless robots are programmed to obey Isaac Asimov-style universal laws, they can only learn, or be programmed with, behaviour from flawed human beings within whatever legal frameworks, political beliefs, or cultural norms exist locally.
Tay demonstrates that machines can learn hatred by observing and modelling human society – and in the future, some may be programmed within institutionally prejudiced organisations, or in countries that have poor human rights records.
The law-enforcement robot
In the future, this nexus of issues may be of particular relevance in law enforcement; the world of ‘Robocop’ is not so far away.
For example, the United Arab Emirates has one of the most advanced police forces in the world, and it is investing heavily in robots and smart-city capabilities, along with technologies such as Google Glass. At present, its robots are being used in public liaison roles, but ‘full AI’ robots may be on the streets of Dubai within five years, and occupying law-enforcement positions within ten.
Silicon Valley has its robocops too: sensor-packed Knightscope K5s, which resemble a cross between a Dalek and a kitchen appliance. These autonomous machines are already patrolling malls, offices, campuses and local neighbourhoods in the San Francisco Bay area, monitoring traffic and recording unusual behaviour. The city believes that they may cut crime by as much as 50 per cent.
In China, the similar-looking AnBot riot-control robot has been programmed to zap people with an electric current if its artificial intelligence determines that they constitute a threat – the first obvious example of a machine being programmed to harm human beings of its own volition. In 2015 in the US, tasers were linked with 48 deaths, so the AnBot has the potential to be the first police robot to kill a human being.
And, of course, robots have military applications, too. The US Loyal Wingman programme is exploring the potential of converting an F-16 warplane into a semi-autonomous, unmanned fighter: another robot that may decide to take human lives.
US Deputy Secretary of Defence, Robert Work, recently said: “We might be going up against a competitor that is more willing to delegate authority to machines than we are, and as that competition unfolds, we will have to make decisions on how we best can compete.” In other words, someone else may develop lethal robots, so we must too: the power to kill is the new competitive differentiator.
The caring robot
Not all robots will be taking the law into their own hands, however. One of the main robotics applications in the near future will be in care roles for an ageing and socially isolated population.
Kerstin Dautenhahn, Professor of Artificial Intelligence at the University of Hertfordshire, is exploring how humanoid robots can integrate with human beings “in a socially acceptable manner” as care assistants and companions. The University has purchased a “typically British house” and filled it with robots to explore the ways in which vulnerable people can feel at ease when living among autonomous machines.
Dautenhahn’s main research is in home-assistance robots for elderly residents in smart homes – a scenario being separately developed by Aldebaran for its Romeo robot. Dautenhahn is also exploring the use of therapeutic educational machines for children with autism, via the University’s KASPAR robot. Similar research programmes have already demonstrated that children on the autism spectrum respond remarkably well to humanoid robots.
In this sense, any lingering fears we might have of blank, emotionless machines are misplaced: incredible though it may seem, those blank, uncaring robots are actively teaching autistic children how to understand, respond to, and express human emotions.
At the same time, cognitive research is helping other robots to learn good behaviour from their daily interactions with the rest of us – despite Microsoft’s spectacular own-goal with Tay.
But are robots being programmed to be more like humans, or are we being programmed to feel more protective of robots? Witness the spread of humanoids like NAO, Pepper, Robi, and Honda’s ASIMO: all designed to be non-threatening, smaller than an adult, and to have sometimes childlike behaviours and voices: clever design decisions to put human beings at ease.
One man knows more about creating humanoid machines than most.
Telepresence robots are an established tech hotspot, allowing a human being in London to work remotely in San Francisco, for example. But Japan’s Prof. Hiroshi Ishiguro has become a global robotics icon by creating a realistic android of himself, which he sends to conferences to give presentations on his behalf.
He’s also the man behind ‘Erica’, which he calls “the most beautiful and intelligent android in the world”. However, Ishiguro revealed that the female-featured android, which can hold natural conversations with humans, incorporating body language and non-verbal cues, is essentially “a fake”: behind ‘her’ are ten separate computers.
So why is Ishiguro focusing on the ‘uncanny valley’ of machines that look like humans? The point is the human interface, he explained; to find out how humans respond to a machine that looks exactly like them. “The ideal interface for a human being is another human being,” said Ishiguro. “The android is the fundamental testbed for understanding humanoid (sic) behaviour.”
He explained that his immediate plan is to use cognitive research to develop humanlike robots that have real intentions and desires, and to “archive a human” in android form.
However, Ishiguro’s long-term aim is not to create artificial people, as such, but to compile the findings of his current research into an obviously machine-like form – more like the cute humanoids that are designed by his counterpart Prof. Tomotaka Takahashi, founder and CEO of the Robo Garage at Kyoto University.
Ultimately, Ishiguro explained that his work is towards forging what he called a “robot society”, a society in which human beings are supported by humanoid robots and androids.
If that sounds sinister, then consider this. First, human beings are already acting more and more like machines out of personal choice, interfacing with each other via social platforms, text, and chat on their smartphones. And second, we don’t regard people who rely on technology to move about, replace missing limbs, or communicate, as being any less than 100 per cent human.
But how far away is the intelligent, fully autonomous machine that we know from a century of science fiction tales?
Take me to your Iida
Dr Fumiya Iida of the Bio-Inspired Robotics Lab at Cambridge University’s Department of Engineering believes that robotics is moving towards a world of “embodied intelligence” as AI and robotics come together over the next 20 years. Robots will not only learn to be more sensitive and responsive to human needs, he said, but also more creative and autonomous.
One Cambridge experiment has seen a robotic arm design and build 500 robots in a week out of smart components, with each one being slightly different to the last: a form of industrialised, autonomous prototyping that could be left running 24×7.
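Under the hood, that kind of autonomous prototyping amounts to an evolutionary loop: build a variant, measure it, keep it only if it beats its predecessor, then vary again. A minimal sketch of the idea follows – the parameter encoding and fitness function here are invented stand-ins, not the Cambridge team’s actual setup, where fitness was measured on physical robots:

```python
import random

def evaluate(design):
    # Stand-in fitness function. In the Cambridge experiment, fitness was
    # how well each physical "child" robot performed; here we simply score
    # a parameter vector against an arbitrary hidden optimum.
    target = [0.7, 0.2, 0.9]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def mutate(design, step=0.1):
    # Each new prototype is a slight variation on the last, clamped to [0, 1].
    return [min(1.0, max(0.0, d + random.uniform(-step, step))) for d in design]

def prototype_loop(generations=500):
    # Start from a random design, then iterate: vary, evaluate, keep improvements.
    best = [random.random() for _ in range(3)]
    best_score = evaluate(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:  # keep only designs that beat their predecessor
            best, best_score = candidate, score
    return best, best_score

best, score = prototype_loop()
print(best, score)
```

Left running continuously, a loop like this never needs a human in the design cycle – which is precisely what makes it a form of industrialised prototyping.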
Dr Komei Sugiura, Senior Researcher at Japan’s National Institute of Information and Communications Technology (NICT), explained how robotic capabilities are already available as on-demand cloud services, via multilingual speech-recognition and synthesis engines – such as Rospeex, which is designed to facilitate human-robot dialogue. IBM’s Watson supercomputer is also available in the cloud to business customers.
But true language learning – as opposed to repeating pre-programmed phrases – remains a challenge, explained Tadahiro Taniguchi, Associate Professor at the College of Information Science and Engineering, at Japan’s Ritsumeikan University.
Taniguchi is exploring how robots can learn languages with no foreknowledge of them, relate spoken words to objects and events, and understand the relationships between written words, phonemes, and sounds. Those days are not far off: “Unsupervised machine learning is gradually becoming a solvable problem,” he said.
Meanwhile, Prof. Sethu Vijayakumar, Director of the Centre for Robotics at Edinburgh University, stressed that his research is moving away from teleoperation and towards autonomy: robots that interact with human beings based on their own acquired knowledge.
But the balance of power should always remain with humans, he said: “We should be looking at collaboration that includes different levels of autonomy at different moments, to reduce human workloads while leaving the human in control.”
Is society ready?
Roy Amara famously observed that we tend to overestimate a technology’s impact in the short term, but underestimate it in the long term. However, it’s clear that the widespread uptake of robotics is approaching far more quickly than most people realise, even as software automation spreads through every walk of life.
We already talk to our machines (Siri on the iPhone), and our machines talk to us (the sat navs in our cars), so in that limited sense we’re prepared for the rise of the robots.
Yet while individual humanoids, such as Pepper (production runs of which have sold out in Japan) and Honda’s brand ambassador ASIMO might be impressive feats of engineering and design, we’ve yet to reach that tipping point in humanoid robotics where a person believes he’s conversing with a truly intelligent, autonomous machine.
Or perhaps we have.
Aldebaran/SoftBank Robotics recently linked a NAO-25 humanoid with IBM’s Watson supercomputer to create a robot with the voice, personality, and interface of a NAO, but the brain of one of the world’s most advanced computers. This video reveals the impressive results. The facility will also be available to business customers of the Pepper robots.
A chatbot was controversially claimed to have passed the Turing Test in 2014, convincing some human judges that it was a teenage boy during text-based conversations. So today, the emergence of intelligent humanoid robots is as much about coordination as innovation, bringing together software, cloud connectivity, AI, embodied intelligence, engineering, machine learning, and cognitive research.
It’s also about rising network speeds: Ishiguro maintains that ‘the cloud’ is currently too slow for robots’ intelligence to be cloud based when engaged in natural language conversations – although Aldebaran’s collaboration with IBM demonstrates that the time-lag can be marginal.
All of which puts Alphabet (Google) in a powerful position. Boston Dynamics’ Atlas robot is on the prowl, but its parent, research company X (formerly Google X, now part of Alphabet), is reportedly looking to offload the robotics division, with Amazon potentially in the frame to buy it.
Until this news broke in mid-March 2016, we had been looking at a future in which Alphabet’s robots might one day recognise our faces and every street corner on the planet, know what we do, what we say, where we live, and who our friends are. But the company is backing away from that vision, along with the cost implications of spending 30 per cent of its resources on “things that take ten years”, in the words of its own chief executive.
The First World Data War
But the technologies still exist and the social context behind these advancements is both interesting and alarming. In many countries, human rights are being rolled back in favour of corporate rights and governance (trade deal TTIP, for example), while wealth is pouring offshore into tax havens and hedge funds – power that is unaccountable, in every sense of the word.
A conspiracy theorist might conclude that we’re experiencing a corporate coup d’état, as a handful of private enterprises and mega-wealthy individuals become more powerful than governments, stripping away national rights to limit corporate activities – even as some governments, such as the UK, protest about political ‘superstates’ while handing ever greater power to the multinationals.
Meanwhile, Paul Mason (again) believes we are heading towards a post-capitalist society, spurred by the rise of new technologies, open source development, and voluntary service exchange.
At the same time, we’ve each been complicit in a vast data-gathering exercise about everyone we know, thanks to social media, image tagging, and our relentless quest to record every aspect of our lives (which I would argue is the only meaningful context for the British government’s surveillance scheme; they’re building ‘The Data Bank of England’, in effect, a phrase I coined a few years ago when Editor of Computing).
The stage is set, therefore, for a global clash of views about what our future should look like, spurred on either side by the rise of a networked culture. Call it the First World Data War. It is into this world that self-aware robots are slowly emerging. This isn’t an alarmist statement: it’s a simple matter of fact, as the Japan-UK seminar amply demonstrated.
Human and robot rights
So it’s inevitable that a new context for human rights will emerge alongside the robots. Serious questions, such as: Can a robot harm a person? What powers might a law enforcement robot have over a human being? And, who is responsible if a robot injures a person by accident? are already being taken very seriously by robotics experts.
But where does the definition of ‘robot’ begin and end? Should incoming laws also apply to software applications? And if not, why not? After all, a humanoid robot is little more than software with limbs, mics, sensors, and cameras.
Yet how society reacts to the emergence into science fact of a new generation of intelligent, autonomous machines will come down to some less predictable things than mere technological advancement – not least of which is how well the technologists really understand human nature, human rights, and human tolerance in their quest to push back the boundaries of technical achievement.
It’s also conceivable that, in response, human society will revert to being more analogue, and leave the digital realm to the machines.
But as Oxford University’s Sandberg acknowledged: “Even experts in AI are very bad at predicting the future. They generate lots of data, but are very bad at making predictions. So we should expect surprises and incorporate that into our thinking about society. Very advanced systems may behave in unpredictable ways.”
Never forget: human beings and human societies are advanced systems too. Don’t write us off too quickly.
Part 2: The Great British Robot
In my first report, above, inspired by the Japan-UK Robotics and AI Seminar – which took place on 18 February, 2016, at the Japanese Embassy in Piccadilly – I explored some fast-emerging futures for robotics worldwide, and the impact that these may have on society.
Make no mistake, the robots are coming. But will any of them be British?
The first thing to say is that many of the world’s leading robotics experts are British, and/or work at British universities. There are robotics labs at universities and colleges throughout the UK and talented innovators have created dozens of startups; the UK is far from a backwater in robotics terms.
Securing investment is clearly one part of the equation. But for this emerging economy to take root in the UK (as the British government hopes), something else needs to happen too: the UK government needs to build an atmosphere of technology trust, something it is very bad at doing.
For example, Whitehall’s proposed surveillance programme has done little more than alienate the very providers on which the burden and cost of this policy will soon be dumped – not to mention alarm civil liberties campaigners and human rights groups. The long-term fudge that will arise from this will destabilise the UK’s digital economy and help break the internet apart into national fiefdoms. In that scenario, everyone loses.
But some real-world applications are no-brainers for the UK, not least one arising from its climbdown from being a global power. It’s a little-known fact that the UK will spend £2 billion every year for the next 100 years to clean up and process our nuclear waste – principally the waste caused by the Cold War arms race, rather than nuclear power.
Currently, a man in a hazmat suit cuts up that radioactive material by hand with an angle grinder – a fantastically British solution to a massive environmental problem. He then has to be washed down by a team of other hazmat-suited men. Much of that contaminated clothing then has to be trashed, meaning that every barrel of nuclear waste generates ten further barrels of radioactive material – including 10,000 pairs of gloves every day (or 365 million pairs of non-biodegradable, radioactive gloves over the next 100 years).
So, nuclear waste disposal robots present an opportunity to save one UK industry alone up to £200 billion over a century, while drastically cutting the amount of hazardous waste it generates and removing vulnerable human beings from harm’s way. Japan faces similar problems, notably at the damaged Fukushima plant, which has so far generated 10.7 million one-ton bags of radioactive waste (see picture).
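The figures quoted above are easy to sanity-check:

```python
# Back-of-envelope check of the nuclear clean-up figures quoted above.
gloves_per_day = 10_000                    # pairs of contaminated gloves per day
total_gloves = gloves_per_day * 365 * 100  # over the next 100 years
print(total_gloves)                        # 365,000,000 pairs

annual_cost_gbp = 2_000_000_000            # £2bn a year on clean-up
century_cost_gbp = annual_cost_gbp * 100   # the ~£200bn century-long bill
print(century_cost_gbp)                    # 200,000,000,000
```

In other words, the “up to £200 billion” saving is simply the century-long clean-up bill, and the 365 million pairs of gloves follow directly from the daily figure.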
Talented robotics experts such as Dr Rustam Stolkin of the Robotics department at the University of Birmingham and Professor Robin Grimes at Imperial College, London, are working in this very area, so you’d think that the UK government would be pouring money into robotics – if only to solve this problem. But is it?
Together, robotics and autonomous/automated systems (RAS, trust the UK to invent a pointless acronym) form one of the ‘eight great technologies’ identified by Whitehall to propel the UK towards future prosperity. The government is investing £200-300 million in the sector by 2020, explained Professor Grimes, who is Chief Scientific Adviser to the Foreign and Commonwealth Office and Professor of Materials Physics at Imperial College.
According to Grimes, the hope is that the market for automated systems alone will be worth £191 billion to the UK economy – making the government’s outlay a mere sprat to catch a mackerel. Further opportunities abound in robotic software and systems to inspect and repair the UK’s ageing infrastructure, not to mention look after an ageing and potentially lonely population.
Impressive? Not when you consider that Whitehall spent between £456 billion and £1.16 trillion (the figures have fluctuated) on bailing out the UK’s banks. And not when compared with the hosts of February’s robotics seminar, Japan.
The Japanese government says it is investing ¥26 trillion (£161 billion) in robotics by 2020, with the aim of creating a “super-smart society”, according to Embassy spokeswoman Kanae Kurata. That’s real ambition backed by hard funding.
We know that Japan is the epicentre of robotics innovation – certainly in humanoid robots, which have long held the status of pop culture icons in Japanese society. So where are the UK’s strengths, other than in abysmal levels of under-investment?
Grimes explained that the UK majors in areas such as computer simulation and has complementary skills in sensors, software, data handling, and what he called “flexible legal environments”. More R&D partnerships are needed between the UK and Japan, he said, pointing to the relationships that already exist between Japanese corporations and British universities, such as Hitachi and Edinburgh, for example.
Fixing a hole
Dr Kedar Pandya is Head of Engineering at the Engineering and Physical Sciences Research Council (EPSRC), which makes him another of the UK’s key policymakers behind the national strategy for RAS (there’s that horrible acronym again). He said that the UK sees strong economic potential in wireless communications, robotic surgery, and assistive technology: all good things, supported by excellent domestic research.
Unmanned systems, manufacturing, and “mobile autonomy” – in the air, on land, underground, and at sea – are other key growth hotspots for the UK, he said. For example, Pandya explained that there is a long-term plan for robots to replace diggers in Leeds and turn it into the world’s first self-repairing city. (Good luck with your PR and worker-relations in the city, Dr Pandya!)
Leeds isn’t the only UK region to be interested in AI and robotics, however. Llewelyn Morgan, Services Manager for Localities, Policies, and Innovation at Oxfordshire County Council, explained how the authority is developing a transit strategy that embraces autonomous vehicles, making it the only council in the country (and possibly the world) to be doing so, he claimed.
With Oxfordshire’s roads already at full capacity, and with “funding on a downward scale, and challenges on an upwards scale” thanks to Whitehall budget cuts, Morgan explained how machine learning and AI combined “with an open, collaborative approach” are providing insights, predictive analytics, and early warnings of travel problems, which can be cross-checked with live sensor data.
Morgan said that the combination of floating data, machine learning, and predictive analytics provides more accurate information than real-time sensor data – predictions that can then be tested against live feeds. This is genuine innovation being carried out by talented people on micro-budgets, and the same technical principles could surely be applied in many other scenarios, such as predicting extreme weather or the spread of viruses.
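The cross-checking idea is straightforward to sketch. Everything below – the moving-average forecaster, the tolerance threshold, the traffic counts – is a hypothetical stand-in for Oxfordshire’s actual models, but it shows the principle: predict from historical data, then compare the prediction against the live feed and raise an early warning when they diverge sharply:

```python
from statistics import mean

def predict_from_history(history, window=4):
    # A deliberately simple stand-in for a predictive model:
    # forecast the next reading as the mean of the recent window.
    return mean(history[-window:])

def cross_check(history, live_reading, tolerance=0.25):
    # Compare the model's forecast against the live sensor feed, and
    # flag a potential travel problem when the two diverge sharply.
    predicted = predict_from_history(history)
    deviation = abs(live_reading - predicted) / max(predicted, 1e-9)
    return predicted, deviation > tolerance

# Vehicle counts per minute on a hypothetical road segment.
history = [100, 104, 98, 102, 101, 99, 103, 100]
predicted, alert = cross_check(history, live_reading=160)
print(predicted, alert)  # a ~60% jump over the forecast triggers an alert
```

The same predict-then-verify pattern transfers naturally to the other scenarios mentioned, such as extreme weather or the spread of viruses: the model provides the early warning, and the live data either confirms or corrects it.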
Calling Doctor Robot
Medical applications are another research hotspot for robotics. If invasive surgery can be performed with microscopic precision by a robot, using tools that are the width of a human hair, then society will increasingly be forced to define in what ways a human surgeon would perform better.
The same principle applies to robotic care assistants in smart homes, which will enable elderly, sick, or vulnerable people to stay in their own home for much longer, reminding them to take medicine and keeping them company – according to speakers at February’s event.
And the more that the UK’s national healthcare policy shifts its focus away from providing public (taxpayer) value to creating private (shareholder) profit, the more likely it is that automation will increase so that offshore investors can run profitable businesses. (An overstatement of the tension in the healthcare sector between shareholder and taxpayer value? Not according to this big pharma CEO.)
Automation favours the algorithm writer, as I observed in a separate article published on this website. Organisations automate for financial and tactical gain, and if the investors are offshore in a tax haven, then local social cohesion, human rights, and employment will barely register as blips on their ethical radar.
Nevertheless, Prof. Guang-Zhong Yang, Director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College, explained how medical robots and ultra-precision medical procedures could “transform cancer surgery” and allow early intervention, using tools that are “like current surgical instruments, but smart”, in partnership with human surgeons.
And the applications don’t end there. With other technologies, such as machine learning, big data analytics, and 3D printing, opening up new worlds of engineering precision and construction, it’s no giant leap of the imagination to suppose that other realms that have long defined human culture and endeavour – architecture, industrial design, and engineering – might become the preserve of robots too.
Underfunding: upsides and downsides
Whether such innovations might be good or bad for human society, creativity, social cohesion, rights, and employment, is a vital question.
As I explained above, the question is not if automation and robotics will spread into those areas of human endeavour that can be easily defined, but when, and how, if for no other reason than the perceived economic advantages they create for shareholder-focused private enterprises. This presents challenges and opportunities to human society, but also – clearly – serious threats.
Given the direction of research that is already being carried out in Japan, the UK, and elsewhere, what jobs, what roles in society, might remain the sole preserve of human beings?
On the surface, one vague answer might appear to be ‘creative jobs’, but while the networked economy might be building more and more platforms for creative people – writers, musicians, designers, artists, photographers, and so on – it is also making it much harder for them to earn a living.
Where the balance is struck, and whether intelligent machines will make more and more people redundant in the workplace or do the opposite by enhancing their human capabilities may come down to whether our policymakers and strategists have even considered these futures – and, of course, whether human beings will tolerate it for long.
And it will also come down to our levels of real-world investment in what the government already accepts is a key technology set for the UK’s prosperity. So the fact that the UK’s investment levels are lamentable when set alongside those of our technology partner, Japan, is unfortunate for the technologists, but a comfort for the alarmists and anyone else who worries about The Terminator.
Edited, earlier versions of these two reports were first published by Diginomica.
Also by Chris Middleton:
The Political Algorithm
The Snooper’s Nightmare
The Robot Banking Machine Doesn’t Add Up (diginomica)
F&A: The Next Target for Robots? (diginomica)
Pinterest board of robot designs
© Chris Middleton 2016. All rights reserved.