How to design benevolent robots and good artificial intelligence.
The RSA and others silence the noise about killer machines by proposing some realistic solutions. Chris Middleton reports.
Tabloids, human rights organisations, and tech visionaries such as Elon Musk have all warned of the rise of malignant machine intelligence, but what can we do to prevent it from happening? How can we make way for a more benevolent machine?
The first thing the tech sector must do is shift its focus from the apocalyptic to the mundane. While fears about out-of-control AIs or autonomous weapons are justifiable – as the military, intelligence agencies, and police circle the technologies like drones – many of the societal risks may be subtler and more insidious, if organisations rush in tactically for short-term gain.
The RSA has published an 83-page report, the Age of Automation, which examines the likely socio-economic impact of robotics, AI, and autonomous systems – both good and bad. It suggests a range of scenarios that industries and governments must work together to avoid. These include:
Narrow, tactical implementations of AI could lead to an “entrenchment of demographic biases”, says the RSA, while amplifying workplace discrimination and blocking people from employment based on their age, ethnicity, or gender [issues that are examined in my separate report].
The RSA continues, “Equipped with AI systems, organisations will have greater precision in predicting people’s behaviours and the risks they face. This could lead to certain groups being denied access to goods, services [such as insurance], and employment opportunities.”
In some cases, these impacts may be covert and deliberate, and in others rooted in flawed or partial training data; information gathered from human society that has long disadvantaged women or ethnic minorities, for example. In the UK, just 17 per cent of STEM positions are occupied by women, which in itself has a negative impact on AI and robotics development.
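The mechanism is easy to demonstrate. The toy sketch below (entirely hypothetical data, not drawn from the RSA's report) shows how a naive screening model trained on historically skewed hiring records simply reproduces the disparity it was trained on, without any explicit discriminatory rule:

```python
import random

random.seed(0)

# Hypothetical historical hiring records: (group, hired). In this toy
# history, group B candidates were hired at half the rate of group A
# for identical qualifications -- the bias lives in the data itself.
history = [("A", random.random() < 0.6) for _ in range(1000)] + \
          [("B", random.random() < 0.3) for _ in range(1000)]

def train(records):
    """A naive 'model': predict the majority outcome seen per group."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return lambda group: rates[group] >= 0.5

model = train(history)

# The model has learned no rule other than "imitate the past",
# yet it now screens out one group wholesale.
print(model("A"))  # True  -- group A candidates pass the screen
print(model("B"))  # False -- group B candidates are filtered out
```

Real systems are vastly more complex, but the failure mode is the same: if the training data encodes a historical disadvantage, a model optimised to fit that data will entrench it.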
Diversity needs to increase, says the RSA: “When machines are only built by a small group in society, they will ultimately only tackle the problems of a small group in society.”
“Tech companies should redouble their efforts to recruit a more diverse cohort of programmers and managers, for example by partnering with groups like Code First: Girls and InterTech LGBT.”
The big picture
So a holistic, sustainable outlook is essential if everyone is to benefit from a technology set that has the potential to be transformative, beneficial, and complementary to human skills. As my separate report suggests, the UN’s Sustainable Development Goals could form the basis of a new regulatory framework.
But what about the details? While the RSA sets out the risks, challenges, and opportunities for the UK in these booming markets, it also makes a number of recommendations on how the tech industry as a whole can cooperate with governments to ensure that the impacts are positive.
First, the industry should develop “benevolent machines”, says the RSA, with programmers, tech companies, and investors “steered towards developing benign forms of technology”. That’s easy to say, but what does it mean?
The RSA explains: “Society can, and should, shape the development of machines. Progressive elements of the tech community should take a lead in drafting and signing up to ethical frameworks that would steer programmer behaviour, as the IEEE has done in the US.”
In 2016, the IEEE Standards Association published its own report, Ethically Aligned Design: A Vision for Prioritising Human Wellbeing with Artificial Intelligence and Autonomous Systems. There have been other recent examples of the sector stepping up to the plate. Google, Facebook, Apple, Microsoft, Amazon, IBM, and DeepMind are founder members of the Partnership on AI (www.partnershiponai.org), which commits them to a voluntary ethical code.
More, the RSA suggests that ethics should be a compulsory module in computer science courses, with developers being asked to sign an ethical pledge, akin to doctors’ Hippocratic Oath.
Show me the money
But there’s another dimension to the ethical development debate: who pays the bills?
Worldwide investment in robotics and AI may be significant, as countries race to stockpile IP, yet much of the money will come from venture capital or private equity funds, which may wish to push the technologies towards narrow commercial targets, suggests the report. Mobilising the social investment community to sponsor benevolent applications and use cases would be one way to counter this: “Philanthropic foundations and socially conscious investors also have a role to play, by funding technologies that have more benign effects on workers.”
The RSA says governments should also plough more public funds into robotics and AI, in order to influence their development along socially advantageous lines – as Japan has, with its £161 billion investment in building a “super-smart society”. In AI, many analysts see a two-horse race emerging between the US’s Silicon Valley powerhouses and China, with the latter having a major advantage.
As science and culture blog The Verge puts it: “To build great AI, you need data, and nothing produces data quite like humans. This means China’s massive 1.4 billion population (including some 730 million internet users) might be its biggest advantage. These citizens produce reams of useful information that can be mined by the country’s tech giants, and China is also significantly more permissive when it comes to users’ privacy.”
Yet a disadvantage in global terms is the extent to which those reams of Chinese data may be walled off from the West.
In the UK, the RSA urges the government to supercharge its spending on robotics and AI, and says: “Part of this funding should be used to launch a new mission that rewards researchers developing machines that boost the quality of work, for example ‘cobots’ that augment human labour. The UK government should look to partner with like-minded countries on such an initiative.”
The UK should also establish a national centre for AI and robotics, adds the RSA. This new organisation would be tasked with diffusing the technologies throughout the economy, coordinating a “national mission to use AI and robotics for the advancement of good work”.
However, the report’s authors have left a sting in the tail for readers in Whitehall: up to 80 per cent of the UK’s existing investment in robotics and autonomous systems (RAS) comes from the EU, according to the RSA (quoting figures from Parliament’s Science and Technology Select Committee) – funding that must now be under threat.
The industry’s to-do list
The report makes further recommendations that governments worldwide can apply in partnership with industry. It says that governments should:
- Future-proof the workforce. Educators must prepare future generations with the skills and competencies that will allow them to thrive in an automated economy. The need to do this is set out in my separate report.
- Encourage organisations to co-create automation strategies with their workforce. Professors Leslie Willcocks (LSE) and Mary Lacity (University of Missouri-St. Louis) have suggested that technology is more likely to be integrated in organisations when the C-Suite is actively involved in spelling out the benefits, in discussion with staff.
- Create a modern social contract. Our tax and welfare systems must evolve so that those who reap the most rewards from automation support those who lose the most, says the RSA. This was a view echoed by techUK in its manifesto for digital change this year, which campaigned for the creation of a new industrial strategy.
The RSA suggests that “flexicurity” should be embedded in the new world of work, via a universal basic income and by championing cooperative schemes in which more and more workers become owners of the business. More, the tax burden should be shifted away from labour and towards capital, it says.
These are all good, utilitarian, even utopian ideas, and some political commentators may regard them as a socialist manifesto. But arguably, these are simply the kinds of principles that are implicit in the flat, networked, collaborative processes that the technologies themselves enable.
Attempting to impose traditional, top-down, hierarchical governance on peer-to-peer processes is where bad policy decisions are made and societies turn against themselves. The networked world favours the many, not the few, and companies such as the self-organising Satalia – in which a person’s influence within the organisation coalesces around their skill set – have proven that new organisational structures naturally emerge in the sweet spot where peer-to-peer technologies and open data meet.
But the challenge remains that some governments are still applying 19th Century industrial thinking to the new world of work. They seem to believe that AI, robotics, automation, the IoT, cloud technologies, and data analytics are about maximising benefits to the few via slashed costs and improved industrial efficiency – coupled with trickle-down philanthropy from gentlemen in stovepipe hats, perhaps.
But industry is no longer run by the likes of Isambard Kingdom Brunel, and it is time that some governments’ social and economic policies finally woke up to that fact. If people like him still exist, they are headquartered in Bermuda, or – like Elon Musk – are warning of Armageddon even as they build the road towards it.
This deep-seated misunderstanding of robotics and AI in government circles, along with a host of other technologies, is why some countries are investing too little, too late, in the wrong areas, and for all the wrong reasons.
But at present, there’s little sign of that changing as we doff our caps, knuckle down, and hope for the best. Ultimately, benevolent machines can only emerge from benevolent societies. We will get the robots we deserve.
• A version of this article was first published on diginomica.
• For more articles on robotics, AI, and automation, go to the Robotics Expert page.
© Chris Middleton 2017.