AI: Friend or Foe?

Why you should implement AI strategically, not tactically


Chris Middleton looks at how organisations are innovating with artificial intelligence, and what the knock-on effects on our lives might be – both good and bad.

Implemented strategically, artificial intelligence (AI) can be a transformative technology; but deployed for the wrong reasons it may create new problems for your organisation. Let’s explore a number of use cases as examples, and then look at the big picture of AI’s future development and the ethical challenges this presents.

Viva voce

The rise of voice-based AIs such as Apple’s Siri, Amazon’s Alexa, and Google’s Assistant promises many things. Among these is a new era of ‘ambient computing’, in which our link with computers is no longer dependent on a screen and graphical user interface (GUI), but on intelligence embedded in the cloud, and in our homes, offices, public spaces, and cities.

Voice-activated AI moves computing much closer to how human beings communicate in the physical world, but also to how we compete for each other’s attention. So an implicit risk in a voice-activated AI world is that our environments become increasingly noisy, in every sense. Fellow travellers on this journey will be apps that allow us to filter out messages that we don’t want to hear, so we can focus on our preferences.

AI can learn our likes and dislikes – but that same capability turns up the volume of the social echo chambers in which many of us sit. A further risk, therefore, is that we may soon live in a world that only tells us what we want to hear: signals tuned to us, with everything else filtered out as background noise. We can see the beginnings of that process on social platforms already.

So it is good to know that AI-powered fact-checking services will be with us en masse, too: Google and Facebook are both investing heavily in the technology, alongside startups such as the UK’s Full Fact, and the Le Monde newspaper’s smart search engine. Perhaps the era of ‘fake news’ and ‘alternative facts’ will be short-lived.

Might AIs like these force politicians to tell the truth? Or will politics and the gaming of smart search engines become more closely intertwined – just as journalism, SEO, and advertising have in a media landscape that’s increasingly about search-optimised marketing?

The chat show

What news agency Reuters calls ‘conversational commerce’ will also be with us, as more and more chatbots enter our lives. Reuters’ 2017 Digital News Report, a study of the new media landscape, predicts that “a mix of storytelling, product discovery, direct purchase, and customer service is seen as the likely path ahead for chatbots; making consumer engagement possible at a much wider scale than could have been achieved before.”

Innovations such as these – together with devices such as the Amazon Echo and Dot – throw down the gauntlet to marketers, who now have to compete for the attention of machines and AI assistants, rather than impressionable humans.

One business leader suggests that this is forcing some organisations to get out of marketing entirely. Josh Valman, young CEO of rapid prototyping outfit RPD International, says: “It’s a problem that has particularly come up for one client of ours, which is a British airline. They’ve basically said, ‘Bollocks to marketing! There’s no point anymore.’ They’ve said, ‘We need to work out how these algorithms work and find better ways of winning the algorithm game. Because we’re never going to be the cheapest airline, we need to find other ways of getting to the top of the list’.”

Transforming markets

AI, automation, and robotics are transforming nearly every sector of the economy. (See my separate report for an in-depth look at disruption across a variety of different industries.) Healthcare is just one of these. Speaking at the World Economic Forum 2017 in Davos, Microsoft CEO Satya Nadella described AI as a “transformative force”, which will help oncologists and radiologists “use cutting-edge object recognition technology to not only do early detection of tumours, but also to predict tumour growth so that the right regimen can be applied”.

Nadella intends to infuse every part of Microsoft’s portfolio with AI. He shared the story of how one of his employees has designed smart glasses that use object-recognition technology as an aid for visually impaired people. Innovations such as this will be a boon in countless ways.

IBM is also refocusing on the technology. The enterprise services giant made its Watson AI supercomputer available as a cloud service in 2016, allowing robots to hold natural-language conversations with people, backed by vast datasets that no human being could rival. Speaking at Davos 2017, IBM Chair and CEO Virginia Rometty said that IBM is placing its “big bet” on AI – or what it calls ‘cognitive computing’. “The reason is that people would be so overwhelmed with information, it would be impossible for any of us to internalise it, to use it to whatever its full value could be. But if you could, you could solve problems that are not yet solvable.”

The enhanced enterprise

The enterprise computing landscape is being transformed by this idea, with the technology increasingly being built into business applications, such as CRM, HR, and finance and accounting. For example, Salesforce.com debuted its Einstein platform in October 2016, opening up a world of smart image identification, marketing automation, predictive lead scoring, automated audience segmentation, personalisation, predictive analytics, and more.

So any organisation’s ability to deploy Einstein will – appropriately enough – be relative, in that it will depend on its ability to gather, store, and process enough high-quality data to make the application of AI and machine learning meaningful.
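
To make that point concrete, here is a minimal sketch of what predictive lead scoring involves under the hood. This is not Salesforce’s actual implementation, which is proprietary; it is a generic illustration in Python using scikit-learn, with hypothetical CRM features and synthetic data. The ‘score’ is simply a model’s estimated probability that a lead will convert – and its usefulness depends entirely on the quality of the historical data behind it.

```python
# A minimal sketch of predictive lead scoring, in the Einstein mould.
# The CRM fields and data below are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical CRM features per lead: email opens, site visits,
# log company size, and days since last contact.
X = np.column_stack([
    rng.poisson(3, n),          # email opens
    rng.poisson(5, n),          # website visits
    rng.normal(4, 1, n),        # log company size
    rng.exponential(30, n),     # days since last contact
])

# Synthetic ground truth: engagement raises conversion odds, staleness lowers them.
logits = 0.4 * X[:, 0] + 0.3 * X[:, 1] - 0.02 * X[:, 3] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The 'lead score' is just the predicted conversion probability.
scores = model.predict_proba(X_test)[:, 1]
print(f"Top lead score: {scores.max():.2f}, "
      f"hold-out accuracy: {model.score(X_test, y_test):.2f}")
```

If the historical labels are sparse, stale, or biased, the scores will be too – which is exactly why data readiness, not the algorithm, is the gating factor.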

AI is also being built into the fabric of Google itself, with CEO Sundar Pichai believing that it promises interactions that are more “natural, intuitive, and intelligent”. Voice already accounts for 20 per cent of all Web searches in the US, according to recent figures.

Speed vs. regulation

So: a transformed landscape, in which organisations of any size can innovate on a level playing field. But will people buy into these technologies? The demand is certainly there. A recent report by Computing found that AI tops the new technology wish list for IT leaders and strategists in medium to large organisations, above big data analytics and Internet of Things applications.

But might customers be rushing headlong into applying a technology that’s still in its infancy? A 2016 global survey of C-level executives by consultancy Avanade found 77 per cent admitting that they have given little thought to the ethical implications of smart applications and devices – a global attitude of ‘invest first, ask questions later’.

This suggests a tactical mindset in many businesses, not a strategic one. But the fallout from the Tay experiment should be foremost in customers’ minds. Microsoft’s chatbot was launched on Twitter in 2016, where it learned hate speech from internet trolls within 24 hours of its debut.

Nadella called these incidents “attacks”, but the fact is that Tay simply failed to understand the nuances of human communication in a society where people are free to ask a robot whatever they want, subject to UK, US, and European laws. (Imagine if Tay hadn’t debuted on Twitter, but on the customer help desk of a government department or a multinational brand.)

Tay’s Chinese counterpart, Microsoft’s Xiaoice, didn’t encounter the same problems when it launched online. This reveals another inconvenient truth: AI isn’t a form of neutral intelligence, as many imagine; it is often deeply influenced by human society, training data, and local cultural differences.
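
The practical lesson from Tay is architectural: never feed raw user input straight back into a model that learns from it. Below is a minimal sketch of the kind of moderation gate such a system needs. The blocklist is a deliberately crude placeholder – a production system would use a trained toxicity classifier plus human review – and nothing here reflects Microsoft’s actual design.

```python
# A minimal sketch of the guard Tay lacked: a moderation gate between raw
# user messages and anything the bot learns from or repeats.
# BLOCKLIST is a hypothetical placeholder, not a real moderation list.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms

def is_safe(message: str) -> bool:
    """Crude gate: reject messages containing blocklisted terms."""
    words = set(message.lower().split())
    return not (words & BLOCKLIST)

training_queue: list[str] = []

def ingest(message: str) -> None:
    """Only allow vetted messages into the learning pipeline."""
    if is_safe(message):
        training_queue.append(message)
    # Unsafe messages are dropped (or routed to human review) - never learned.
```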

Rise of the eurobot

So how can the risks of this transformative technology be minimised? In 2017, the European Union announced its intention to regulate the markets for robots and AI, in an attempt to ensure that the technologies’ benefits remain targeted at a fairer society. But Brexit and the rise in anti-European sentiment in the UK, US, and even parts of Europe itself, may threaten these ‘eurobot’ ambitions.

Without regulation and oversight, any platform-wide application of AI – in a world in which Amazon, Google, Apple, Microsoft, and others compete not just for sales, but also for customer loyalty and partner rewards – risks opening up antitrust behaviour on an epic scale.

“Alexa! Buy me some coffee!”, “OK Google, order pizza.”, and “Hey Siri, read me the news!” are innocent requests. But which coffee, whose pizza, and which news organisation? Ask your Amazon Echo to book you a flight to New York and two nights in a Midtown hotel, and then consider why it has suggested one airline and one hotel group above all the others. This is why transparency and trust will be essential in a world where embedded intelligence helps people to make decisions about what to buy, what messages to listen to, and what information they need – or it makes those decisions for them.

Three principles of adoption

So how to find the right path through the AI minefield? IBM’s Rometty believes that organisations need to adopt three principles when it comes to AI. She says, “One is understanding the purpose of when you use these technologies.

“For us, the reason we call it ‘cognitive’ computing rather than ‘AI’ is that it is augmenting human intelligence – it will not be ‘Man or machine’. Whether it’s doctors, lawyers, call centre workers, it’s a symbiotic relationship conversing with this technology. So our purpose is to augment and to really be in service of what humans do.

“The second is, industry domain really matters. Every industry has its own data to bring to this and to unlock the value of decades of data combined with this. So these systems will be most effective when they are trained with domain knowledge. And the last thing is the business model. For any company, new or old, you’ve accumulated data. That is your asset… and that [principle] applies to how these systems are trained.”

Microsoft’s Nadella agrees that AI should be seen as a strategic complement to human intelligence, and not as a replacement for it. He says, “That’s a design choice. You can come at it from the point of view that replacement is the goal, but in our case it’s augmentation.”

But the problem is that many client organisations see AI, machine learning, robotics, and automation as simple replacements for human workers, allowing them to slash internal costs and leave replicable processes running 24 hours a day.

Last year, Dr Anders Sandberg of Oxford University’s Future of Humanity Institute predicted that 47 per cent of all jobs will be automated, adding, “if you can describe your job, then it can – and will – be automated”. Put another way, the more our jobs are governed by rules, time sheets, checklists, and spreadsheets, the more we are collaborating in the process of being replaced by machines. Every customer service or bank employee who reads a series of slides to you off a computer screen may as well be a robot – no matter how skilled, qualified, and likeable they may be.

The public sector

And this mindset doesn’t just affect private sector enterprises, such as banks, law firms, and contact centres. A recent report by right-wing think tank Reform, ‘Work in Progress: Towards a Leaner, Smarter Public Sector Workforce’, claims that AI, robots, and automation will sweep aside 250,000 public sector jobs in the UK alone – including many teachers, doctors, nurses, and administrators. Moreover, Reform suggests that this will be a good thing.

The technologies will arrive like Uber in the public sector, says the think tank, creating a ‘gig economy’ in which human workers compete via reverse auction to offer their services at the lowest price, while robots run the machinery of central government, along with local health and education.

The report is binary, simplistic, and ideology-driven, reflecting a world in which all the focus is on cost, rather than on social value, human benefit, or risk. Yet its headline findings may appeal to organisations that relish the (apparent) prospect of easy solutions and sweeping efficiency drives.

Is AI the new outsourcing?

But all of this should ring some familiar bells for innovators and technology/business strategists. A decade ago, offshore outsourcing (‘offshoring’) promised easy help desk solutions at dramatically lower cost, before disastrous customer feedback forced many organisations to repatriate those functions.

Damage to corporate reputations wasn’t part of the original equation when organisations rushed to send their call centres to India, Vietnam, or the Philippines; but it should have been. That’s obvious with hindsight, so why didn’t anyone ask the right questions at the time, such as: “Is this a good idea?”, “What if people don’t like it?”, “Do offshore agents have enough UK knowledge?”, and “What message does this send our customers?”

With a similar rush towards AI and automation today, there’s a risk that what AI technology companies aim to provide and what some enterprises believe they’re getting are completely different things. This pushes AI’s design ethos and its underlying ethics into the spotlight. Or at least, it should do.

A problem of design

The ethical dimension of AI must be an upfront strategic consideration, and never an afterthought. Joichi Ito is head of MIT’s Media Lab, where he works with the next generation of technologists. Describing some of his own students as “oddballs”, he admits that serious problems can arise with AI at the design stage.

Joining IBM’s Rometty and Microsoft’s Nadella onstage at Davos, he said, “I think people who are focused on computer science and AI tend to be the ones that don’t like the messiness of the real world. They like the control of the computer. They like to be methodical and think in numbers. You don’t get the traditional philosophy and liberal arts types.” But human society isn’t binary; it’s complex, emotional, nuanced, and sometimes irrational, biased, or prejudiced.

Ito admitted that problems such as these can be perpetuated, rather than solved, by coders: “The way you get into computers is because your friends are into computers, which is generally white men. And so if you look at the demographic across Silicon Valley, you see a lot of white men.

“One of our researchers is an African American woman, and she discovered that in the core libraries for facial recognition, dark [sic] faces didn’t show up. So if you’re an African American person and you get in front of it, it won’t recognise your face.”

You read that correctly: coders designed a racist AI, not because they set out to do so, but because of the lack of diversity in their own closed, inward-facing group. Problems such as this are surprisingly widespread, and are discussed in this separate report.

Ito explained, “One of the risks that we have in this lack of diversity of engineers is that it’s not intuitive which questions you should be asking, and even if you have design guidelines some of this stuff is a field decision. So one thing we need to think about is that when the people who are actually doing the work create the tools, you get much better tools. And we don’t have that yet. AI is still somewhat of a bespoke art.

“Instead of creating a solution, you need to integrate the lawyers and the ethicists and the customers to get a more intuitive understanding of what you need.”
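
One concrete practice that follows from Ito’s point is disaggregated evaluation: measuring a model’s accuracy separately for each demographic group, rather than trusting a single headline figure. The sketch below illustrates the idea with toy predictions and hypothetical groups; an audit of the facial-recognition case he describes would use real demographic labels on the test set.

```python
# A minimal sketch of a disaggregated accuracy audit: evaluate a trained
# classifier separately per demographic group, to surface exactly the kind
# of skew Ito describes. Predictions, labels, and groups are hypothetical.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy for each demographic group in the evaluation set."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

# Toy evaluation data: the model performs far worse on group 'B' than on
# group 'A', yet the aggregate accuracy (0.8) hides the gap entirely.
preds  = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]
labels = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups = ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']

print(accuracy_by_group(preds, labels, groups))
# {'A': 1.0, 'B': 0.6} - a red flag no single headline metric would show
```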

The pros and cons

UK-RAS, the umbrella organisation for UK robotics/AI research and investment, produced a 2017 white paper on AI. It sets out both the opportunities and the risks of what it describes as a transformative, trillion-dollar technology, the future of which extends into augmented intelligence and quantum computing.

On the one hand, the authors note: “[AI] applications can replace costly human labour and create new potential applications and work along with/for humans to achieve better service standards. […] It is certain that AI will play a major role in our future life. As the availability of information around us grows, humans will rely more and more on AI systems to live, to work, and to entertain. […] AI can achieve impressive results in recognising images or translating speech.”

But on the other, they add: “When the system has to deal with new situations when limited training data is available, the model often fails. […] Current AI systems are still missing [the human] level of abstraction and generalisability. […] Most current AI systems can be easily fooled, which is a problem that affects almost all machine learning techniques.

“Deep neural networks have millions of parameters and to understand why the network provides good or bad results becomes impossible. Trained models are often not interpretable. Consequently, most researchers use current AI approaches as a black box.”

That last quote is telling: researchers are saying that some AI systems are already so complex that it is impossible for even their designers to say how or why a decision was made.
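
Researchers do have crude black-box probes, though. One of the simplest is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, revealing which inputs a decision actually depends on, even when the internals are opaque. The sketch below demonstrates the idea on a toy model and synthetic data; it is illustrative only, not drawn from the UK-RAS paper.

```python
# A minimal sketch of permutation importance as a black-box probe.
# We can't read a model's internals, but we can destroy one feature's
# signal at a time and watch the held-out accuracy fall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only features 0 and 1 matter

model = RandomForestClassifier(random_state=0).fit(X[:400], y[:400])
X_test, y_test = X[400:], y[400:]
baseline = model.score(X_test, y_test)

for j in range(X.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # shuffle feature j only
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j}: accuracy drop {drop:+.3f}")
# Features 0 and 1 should show large drops; features 2 and 3 roughly none.
```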

Conclusions

AI will benefit human beings in countless ways and help many of us to innovate and do our jobs better. It may help mankind to cure diseases and uncover new intelligence in research and development. But MIT’s Ito is right: AI is never entirely ‘artificial’, but often a simple expression of the belief systems of its human designers, coders – and customers.

The first things to be learned by machines or automated in a robotic world aren’t repetitive, replicable tasks, but leaders’ assumptions about the societies in which we live, or the markets in which their organisations operate.

For example, imagine a government that believes in citizen surveillance. Now picture an enterprise software provider telling ministers how AI can uncover hidden patterns of behaviour. How long will it be before a coder is instructed to write algorithms to root out supposed threats to society? It’s probably happening right now.

Surely that’s a good thing, you say? But what if the base assumptions are wrong, or the data is incorrect? And what if the client’s rules are biased against certain groups of people, or against opposing political views? Suddenly you’re automating a political viewpoint, not searching for answers at all.

Those decisions may then be reinforced by software designers who – in many cases – understand the on/off world of computers much better than they understand complex human beings. As a result, any false assumptions, biases, or prejudices (together with any bad or incomplete data) can be cast into algorithms that spread worldwide. This is neither an alarmist statement nor some paranoid liberal hypothesis, as my separate report explains.

These are the real ethical dimensions of AI. The challenge isn’t in the applications themselves – many of which will be transformative and positive – it is in the thought processes that occurred before the coders were brought in, combined with the mindset and world views of the programmers. Add to that whatever the customers believed they were getting – which as we’ve seen, may be completely different to what the providers set out to design – and then factor in the messiness of the real world, which includes trolls trying to break the system.

Never forget that trolls are customers too.

My recommendations

Buyer beware. Innovate and get creative with AI, but think strategically – not tactically – about it, and engage the one thing that computers don’t have: common sense. Use AI to augment and complement your business, your internal data, and your hard-won human expertise, not to replace them. And if systems are being designed especially for you, then check your assumptions at the door. Think like your customer actually thinks, not how you would like them to. And consider how the world actually works, rather than how you would like it to.

If you can’t do that, then don’t expect AI to do it for you.

.chrism

• A two-part version of this article was first published by Hack & Craft News.
• For more articles on robotics, AI, and automation, go to the Robotics Expert page.
• Further reading: the UK’s booming AI sector.

Enquiries
07986 009109
chris@chrismiddleton.company

© Chris Middleton 2017