Where technology and society meet: signal amidst the noise
4 January 2019: Will ‘Abbatars’ be Waterloo for holograms?
Chris Middleton explains how a new generation of holographic displays is remixing old technologies with 21st Century applications.
With Abba set to release their first new music in 35 years this year, clamour for a reunion tour is growing. The pop legends are believed to be hitting the road again in 2019-20 – not in the flesh, but in the form of holograms or ‘Abbatars’.
While the tour is likely to follow the circus model of travelling from city to city, there’s no reason why it couldn’t play simultaneously in different parts of the world, as the stars will be virtual.
Abba aren’t alone in using a mix of technologies to trip the light fantastic and thrill fans old and new: Elvis Presley, Tupac Shakur, Roy Orbison, Amy Winehouse, Maria Callas, Michael Jackson, Nat King Cole, and Whitney Houston are among the lost performers who have been resurrected onstage, via technologies such as holography, 3D projectors, and projection mapping.
In the latter case, images are accurately beamed onto moving surfaces, which are sometimes controlled by synchronised robots.
One of the best examples of projection mapping is the short film Box, made by San Francisco robotic design and engineering studio Bot & Dolly in 2013. The technology was subsequently acquired by Google’s parent, Alphabet.
The video, captured entirely in camera, blends synchronised robotics, animation, and projection mapping to create an immersive world where the real and the digital merge to extraordinary effect.
Projection mapping can be site-specific, too. One recent example is the 2018 bicentennial of the National Museum in Prague, which used the building itself as the screen. Musician Roger Waters’ solo tour of The Wall and his Us & Them show also used some of these technologies.
His former band, rock behemoths Pink Floyd, recreated the iconic prism from The Dark Side of the Moon album cover as a spinning hologram at the 2017 Their Mortal Remains exhibition at the V&A in London.
Meanwhile, a holographic Kate Moss appeared floating inside a glass pyramid at the V&A’s Alexander McQueen retrospective, Savage Beauty, in 2015, and at the show’s New York run in 2011.
The ghostly supermodel who wowed visitors in London and New York was created by projectors, image cropping (to remove framing from the source footage), keystoning, distortion, and angled glass. A life-size version first appeared at a McQueen catwalk show in Paris as far back as 2006, courtesy of VFX company, Glassworks.
But many of the things that we call ‘holograms’ aren’t, strictly speaking, anything of the sort. A hologram is a physical object that diffracts light into an image – a recording of interference patterns that contains 3D information about an object. The term ‘hologram’ can refer to both the encoded material and the image resulting from it.
However, the other methods of projecting images described in this article can still be described as ‘holographic’, because they have an optical presence and appear to have spatial/3D quality.
The particular effect used in stage shows, which many assume is a hologram, isn’t a new technology. It’s really an update of a Victorian innovation, the ‘Pepper’s Ghost’ stage illusion. This was perfected in the 1860s by scientific lecturer John H Pepper, based on a concept invented a few years earlier by engineer Henry Dircks.
The illusion relied on the reflective and refractive qualities of glass. At Pepper’s demonstration of the technique, unsuspecting theatre audiences viewed a brightly lit stage through a glass screen. The panel itself was invisible, because the auditorium was dark and the glass was angled to reflect an offstage actor, allowing him to appear and disappear in front of the audience’s eyes like a ghostly apparition.
Fast-forward to the present and, by combining existing video with computer animation, computer-controlled projectors, and invisible screens, legendary musicians like Abba, Tupac, Michael Jackson, or Elvis appear to tread the boards again – as 21st Century Pepper’s Ghosts, or phantoms of their former selves.
But the technology is not always associated with bringing the past back to life or reanimating long-lost performers. There’s no reason it can’t be used in real time as a telepresence tool.
In November 2018, students at London’s Imperial College attended the launch of a series of holographic lectures, in a partnership between Imperial’s own Edtech Lab and Toronto-based ARHT Media.
Via the latter’s holographic telepresence displays, people can be beamed into an event from a ‘capture studio’ with near-zero latency, see their fellow panel members, and interact with the audience.
The company’s ARHT Engine enables live streaming of content to one or more ARHT Holographic Displays simultaneously, with multiple playback channels in HD or 4K. Subjects can be prerecorded too, meaning that the technology has strong advertising and retail display potential.
At the launch, speakers appeared life-size onstage at the Imperial College Business School in London, despite being in Los Angeles, New York, and another venue in London at the time. They were able to field questions live from the students.
Imperial plans to use the technology to deliver lectures from remote speakers throughout the academic year.
Dr David Lefevre, director of the Edtech Lab at the Business School, said: “Introducing hologram technology to the classroom will break down the limitations of traditional teaching by creating an interactive experience that benefits both students and academics.
“Rather than replacing or reducing real-life lectures, the hologram technology will provide greater flexibility for academics by enabling them to continue teaching while travelling, ensuring consistency and quality for students. The technology will also widen the scope for Imperial to invite global leaders and influencers from industry to give talks to students, enriching the learning experience.”
But musicians are getting in on the live telepresence act too, with remote live gigs beamed to venues in the form of HD projections onto angled mylar fabric – which reflects any light beamed onto it, but is otherwise invisible to the audience.
Again, the effect is a reworking of the Pepper’s Ghost illusion. The image is first projected 45 degrees downward from the ceiling onto a floor, which reflects the image back up to the onstage screen. With the transparent sheet also angled, the images appear vertical and perpendicular to the stage – creating the illusion that there are real performers onstage.
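The double bounce is just the law of reflection applied to a tilted surface. A minimal 2D sketch (with illustrative coordinates, not the measurements of any real rig) shows why a point projected flat on the floor reads as an upright point behind the angled sheet:

```python
import math

def reflect(point, normal):
    """Reflect a 2D point across a line through the origin with unit
    normal `normal`, using p' = p - 2 (p . n) n, the law of reflection."""
    dot = point[0] * normal[0] + point[1] * normal[1]
    return (point[0] - 2 * dot * normal[0],
            point[1] - 2 * dot * normal[1])

# A transparent sheet tilted at 45 degrees lies along the line y = x,
# whose unit normal is (1/sqrt(2), -1/sqrt(2)).
sheet_normal = (1 / math.sqrt(2), -1 / math.sqrt(2))

# A point of the projected image on the floor, 2 m out from the sheet,
# reflects to approximately (0.0, 2.0): it appears 2 m up, upright.
print(reflect((2.0, 0.0), sheet_normal))
```

Horizontal displacement along the floor becomes vertical displacement in the virtual image, which is why a flat projection appears to stand on the stage.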
But not all holographic effects use the Pepper’s Ghost principle. A new generation of programmable displays adds brilliant LEDs to another old technology: propellers. When the propellers spin, they become invisible, creating the illusion of images floating in the air. One company working in this space is Hypervsn.
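The persistence-of-vision arithmetic behind such spinning displays is easy to sketch: the blade’s spin rate sets how often the viewer sees a complete image, and the LED refresh rate sets the angular resolution. The figures below are illustrative assumptions, not any vendor’s real specification:

```python
def pov_params(rpm, slices_per_rev):
    """For a persistence-of-vision blade: the full-image refresh rate
    (revolutions per second) and the LED column update rate needed to
    draw `slices_per_rev` angular slices on each revolution."""
    revs_per_sec = rpm / 60.0
    led_update_hz = revs_per_sec * slices_per_rev
    return revs_per_sec, led_update_hz

# Illustrative figures only: a blade spinning at 1,500 rpm, drawing
# 360 one-degree slices per revolution.
frames, update_hz = pov_params(1500, 360)
print(frames)     # 25.0  image refreshes per second
print(update_hz)  # 9000.0  LED column updates per second
```

Anything above roughly 24 refreshes per second fuses into a steady image for the human eye, which is why even a modest spin rate is enough for the floating-image illusion.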
Meanwhile, a Japanese technology called Holovect claims to use lasers that are capable of “ionising air molecules” to create images that float in the air on “pockets of photon-emitting plasma”. These images can even be touched by human hands – unlike those spinning propellers – say the inventors.
Moving, floating, interactive images in the air – without the need for augmented reality glasses? The technology exists to make this Star Wars / Blade Runner 2049 concept viable, according to its inventors – and in portable devices, too.
However, the Kickstarter-funded project has run into problems, with many backers invoking their right to a refund for non-delivery of a working product.
Just don’t call these technologies holograms: in the case of Holovect (if it exists), they’re volumetric vector images projected into modified air – in other words, projections in space.
Yet I suspect that ‘hologram’ will always be the preferred term for this type of effect, whether the images are volumetric projections, spun from LEDs, or globe-trotting Abbatars. Take a chance on these.
9 November 2018: Who lives, who dies on autonomous roads?
A new survey from MIT into the ethical challenges of autonomous vehicles reveals some troubling answers to tough questions. Chris Middleton reports.
There are 1.2 billion cars in use worldwide, and every year 1.2 million people die on the roads, meaning that one person loses his or her life for every 1,000 cars on the road.
Figures from the US government reveal that, in 2017, over 37,000 people perished in motor vehicle crashes in America alone. Ninety-four percent involved driver-related factors, such as distraction, alcohol, speeding, or illegal manoeuvres.
The inescapable conclusion: human drivers are the biggest danger to themselves and to other people.
So getting rid of the driver appears to be the logical answer to a massive global problem. It would also free up the 293 hours that the average American spends behind the wheel of a car every year, according to the American Automobile Association (AAA). At least, in the long run.
But a future in which people no longer own or drive cars themselves is a long way off. In the meantime, driverless vehicles will have to share the road with traditional automobiles.
More significantly, they’ll have to co-exist with vulnerable humans: people crossing busy roads, wheeling prams, sitting in wheelchairs, riding bikes, standing on street corners, and generally behaving in a messy and (perhaps) unpredictable way.
Yet despite the genuine commitment of mobility companies to make our roads safer and our air less toxic, there will come a time when autonomous cars will have to make life-or-death decisions.
This gives rise to some real ethical conundrums. For example, in situations where death or injury seems unavoidable, should a computer take an action that is likely to kill the driver/passenger, or the pedestrian who walked in front of the car?
Should a driverless car swerve to hit a couple of people, rather than a larger group of bystanders? Or strike an adult instead of a child? And who might be responsible for these deaths?
When an Uber test vehicle killed 49-year-old Elaine Herzberg in March 2018, as she wheeled her bike across a road in Tempe, Arizona, the ethical can of worms opened by such incidents became all too apparent.
Was the safety driver responsible for not watching the road? If so, why didn’t Uber’s autonomous system identify a woman wheeling a bicycle until it was too late? And why were the test Volvo’s own safety systems, which might have prevented the accident, disengaged?
Imagine the lawsuits that would be ongoing today if the dead pedestrian had been the CEO of a Fortune 500 company, rather than a homeless woman. Or, imagine some future autonomous car electing to strike, say, a disabled person, another woman, a child, or someone from an ethnic minority, rather than a group of middle-aged white men…
In the US, researchers at the Massachusetts Institute of Technology (MIT) have looked into these problems, with a global study drawing in over two million online participants. The aim was to examine different versions of the ethical conundrum known as the ‘Trolley Problem’.
This involves scenarios in which an accident is imminent, and the driverless vehicle must choose between potentially fatal options – such as whether it’s better to swerve towards a couple of people, rather than a larger group.
The Moral Machine
To conduct the survey, the researchers designed what they called the ‘Moral Machine’, a multilingual online game in which participants could state their preferences in a series of dilemmas that autonomous vehicles might face.
Some of the questions were more intriguing and troubling than others. For instance: If it came down to an either/or choice, should the car spare the lives of law-abiding bystanders, or law-breakers – people who might be jaywalking, for example? Most respondents said it would be better to save pedestrians who have never broken the law.
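Aggregating these forced-choice answers is, at heart, simple counting: for each dilemma, tally which outcome respondents chose to spare. The function and sample below are invented for illustration and are not drawn from the real dataset:

```python
from collections import Counter

def preference_rate(responses, option):
    """Fraction of forced-choice responses that chose to spare `option`."""
    return Counter(responses)[option] / len(responses)

# A hypothetical miniature sample; the real Moral Machine gathered
# nearly 40 million individual responses to dilemmas like this one.
sample = ["pedestrians", "pedestrians", "jaywalkers", "pedestrians"]
print(preference_rate(sample, "pedestrians"))  # 0.75
```

The researchers’ actual analysis is far richer – slicing such rates by age, education, gender, income, politics, religion, and country – but every headline finding ultimately rests on proportions of this kind.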
In a future society where people’s reputations are governed by social ratings and app usage, such a hypothetical question could become all too real. It’s conceivable that an autonomous vehicle might be able to tell a law-abiding citizen from a serial offender.
In 2020, China – where more and more citizens use the same WeChat app to network and pay for goods – will roll out just such a compulsory ratings system (it’s already in voluntary use). So it’s possible that a future crashing car may decide to take out a criminal. How’s that for an episode of Black Mirror?
“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” said Edmond Awad, post-doctoral researcher at the MIT Media Lab and lead author of a new paper outlining the results of the project. “We don’t know yet how they should do that.”
The Moral Machine compiled nearly 40 million individual responses from around the world. The researchers analysed this data as a set, but also subdivided the participants into groups defined by age, education, gender, income, and political and religious views.
The team found few significant moral differences based on these characteristics. However, they did find clusters of preferences based on cultural and geographic affiliations.
Overall, the researchers found three elements that most respondents agreed on. People generally believed in sparing the lives of: humans over other animals; the many rather than the few; and the young, rather than the old.
But it wasn’t straightforward: the degree to which respondents agreed or not with these principles varied among different groups and countries. For example, MIT found a less pronounced tendency to favour young people in some parts of Asia, where many cultures honour age and experience over youth.
Conversely, respondents in southern countries had a stronger preference for sparing young people over the old, said MIT.
“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now,” says the report.
“Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
This is a key point when individual countries, such as the US and China, are locked in a race for dominance of driverless transport. Recent AAA research found public support for driverless technologies waning in the US, in the wake of the Uber and Tesla accidents.
“The question is whether these differences in preferences will matter in terms of people’s adoption of the new technology when [vehicles] employ a specific rule,” continued Awad. “What we have tried to do in this project, and what I would hope becomes more common, is to create public engagement in these sorts of decisions.”
This is a timely survey, with the full results published in Nature. And some experts agree that the core debate should be about ethics and accountability, and not whose technology is best.
For example, a recent news report on Addison Lee’s plan to bring driverless taxis to London by 2021 found Cambridge Consultants’ machine learning expert Dr Sally Epstein slamming the focus on technology over ethics and transparency.
She said, “When fully autonomous vehicles do finally arrive, explaining how their decisions are made, particularly following accidents, will be much more important than any statistical proof that they experience fewer accidents than with humans at the wheel.”
But while the MIT report itself is fascinating, insightful, and useful, it suffers from boiling down an important debate to a set of binary options. This risks reducing ethics themselves to either/or answers to received, utilitarian questions.
What about option three? What if neither option is acceptable? And who questions the questioner? After all, coding the instruction ‘Kill a criminal rather than a law-abiding citizen’ into an AI system would itself present a moral hazard, even if it is in response to a majority view.
That criminal might be a good person who made one mistake, after suffering a lifetime of hardship and abuse, while the law-abiding citizen might be a terrible individual who has contributed nothing to society.
The point is this: Asking a machine to decide who lives and who dies shouldn’t be reduced to a simple set of binary options in this way, like a switch in a microprocessor.
At present, there’s little evidence – outside of China, at least – that consumers actually want the mass introduction of autonomous transport, despite the problems it may solve in the long term.
Connected, smart, electric vehicles with driver-assistance systems, yes. But mass autonomy? Vendors need to do far more to convince citizens of the benefits – especially in the US, where the ‘lone driver out on the open road’ is core to the American Dream.
Of course, others may argue that that is the real problem.
What do you think?
19 February 2018: Crypto-mining: Why gamers and IoT users should worry about NVIDIA’s stock price
Chris Middleton explains why gamblers’ rush to stockpile GPUs to mine cryptocurrencies and run blockchains is creating distortions in the real world. At the centre of the storm is gaming hardware giant, NVIDIA.
The connection between blockchain and cryptocurrencies on the one hand, and computer gaming hardware on the other, might not be obvious. Nor the connection with AI, drones, robotics, and the IoT. But a big clue as to why they are all deeply connected can be found in NVIDIA’s recent financials.
Earlier this month, the Santa Clara, CA-based graphics hardware company reported annual revenues of $9.71 billion, a year-on-year increase of 41 per cent ($2.8 billion). Q4 revenue alone was up 34 per cent year on year, at $2.91 billion, with gross margins of 60 per cent, giving the company a record quarterly profit of over a billion dollars.
Within those Q4 figures, revenues from Graphics Processing Unit (GPU) sales were $2.46 billion, up by one-third on the same quarter last year. The message was clear: the GPU trade is booming, and this stands at the core of NVIDIA’s record-setting results.
For a company in the gaming, data centre, and (increasingly) AI business, that’s hardly surprising, and CFO Colette Kress was keen to credit gamers and the Christmas season for the figures. On the face of it, it seemed like a watertight explanation. But was it?
The GPU arms race
Two things are certain. One, retail GPU prices are going through the roof for high-end models, reversing the standard IT trend of commoditising over time. And two, NVIDIA is a stock market darling. Anyone investing on 1 January 2016 would have seen the value of a single share rise from $30 to $246 today.
Which is where things get interesting: that growth curve is remarkably similar to the graph for the market capitalisation of the cryptocurrency sector in the same timeframe. It’s not a coincidence.
GPUs were designed to accelerate the creation of onscreen images. That’s a mathematically intensive, specialised task that runs better in dedicated, optimised hardware. But this high-speed number-crunching ability is why GPUs are now used to accelerate performance in many types of device and facility, including embedded systems, smartphones, and data centres, alongside PCs and gaming consoles.
It’s also why GPUs are increasingly important in IoT applications such as robotics, drones, and AI, where speed and responsiveness are crucial.
This explains why GPUs have become critical factors in another boom area: blockchain and cryptocurrency mining – the process of contributing a computer’s processing cycles to running distributed ledgers and cryptocurrency networks.
Hype-train jumpers have been stockpiling high-end GPUs, stripping the retail and B2B supply market of stocks in their quest to build high-speed domestic rigs to help run these networks. This has pushed prices through the roof for all GPU applications, depriving NVIDIA’s traditional customers of essential hardware. A form of hardware stock bubble, in fact.
In turn, this has created what some stock market analysts are calling a ticking timebomb for NVIDIA, which has gone out of its way to play down the impact of mining on its results. In theory, the faster the processor, the better and more profitable the result from cracking the crypto code – although evidence suggests that those days may be passing.
Indeed, many now question why some part-time gamblers are mining cryptocurrencies for profit at home when the hardware and electricity costs alone are slashing their margins to the bone. Some wannabe crypto-miners may simply have bought themselves a room full of expensive, unopened gaming hardware.
Others may flock to the cloud and ride the fortunes of the crypto market from a safe distance by letting remote data centres handle the loads (aka cloud mining).
And that’s the time bomb for NVIDIA, claim some stock market watchers: a retail market starved of stock, a massive aftermarket of unused GPUs (that may one day be worthless), a wave of hype attracting gamblers and amateur number-crunchers, and a vendor whose own financials may, some analysts allege, be tied to the tracks of the cryptocurrency rollercoaster.
In a hyperconnected world, the creation of any new type of network – and networked behaviour – is freighted with risk. So it will be interesting to see what happens when NVIDIA releases its long-anticipated Turing GPU, which is designed specifically for currency mining applications.
But is the new hardware evidence of NVIDIA making a long-term bet on crypto? Or is it a hedge against its mainstream business, or even a safety valve to take the heat out of its own internal economy? Either way, NVIDIA’s other customers in the data centre, AI, robotics, and gaming sectors will be crossing their fingers and hoping that the GPU titan doesn’t crash.
But what’s the underlying lesson here, the hidden seam of code? It’s this: despite all the hype around blockchain and cryptocurrencies – and what some see as computers’ ability to conjure real money out of virtual air – there is always a cost in the physical world.
In other words, everything virtual needs to exist inside something physical, something that consumes power, which costs money to ship and build, and which is governed by the normal laws of physics.
Anything that happens in the virtual world has an effect in the physical world – and in this case, as the crypto market has increased in value, so has the cost of the hardware that runs it. That cost is in scarcity, electricity, heat, energy, carbon, human labour, manufacturing and supply chain processes, and shipping GPUs into gamblers’ living rooms in piles of unopened boxes.
Put another way, what’s the cost per watt of mining? Without that data, it’s impossible to calculate a fair value – or any meaningful value – for a cryptocurrency [thanks to Tom Hughes for that observation – CM].
Yes, blockchain will help to create a new data commons and a consent-based transaction system; yes, decentralised, personal ownership of data is a good thing; and yes, the emergence of cryptocurrencies may well constitute the beginnings of a new financial order. But this is the new relativity: the virtual and the physical worlds will always be equivalent – equal and opposite in reactive terms.
It’s just elementary physics; it’s mass and energy all over again. And without any insight into the real cost of processing blockchains and cryptocurrencies, it’s hope over gold.
The irony is that many wannabe crypto-miners would have done much better financially by buying shares in NVIDIA.
• This article was first published on Internet of Business.
December 2017: That was the year that was
It was great to be asked by so many organisations – including BBC radio and TV, ITN, schools, and several event, conference, and exhibition organisers – to talk about the social and economic impacts of robotics and AI this year, and to help people prepare for a range of possible futures.
And I’ve written a great deal about these things too, because in a world of hype, scare stories, and sensationalist headlines, I believe it’s important to focus on the issues that really affect people.
I learned so much from everyone I met in 2017 – some extraordinary individuals among them, such as the teenager who told me that he plans to devote his life to developing safe nuclear fusion technology and “providing clean energy for all mankind”. Wow. The budding scientist was serving the coffee at an event and came over to speak to me because he wanted to meet my robot. (As ever, some of the most interesting conversations happen on the fringes, where you least expect to find them.)
2017 was also a red-letter year for Stanley Qubit, the NAO robot that I own and manage, and which has become something of a celeb. The robot has been invited to schools – such as the south London comprehensive where it helped sixth formers think about the future of skills and employment. It attended conferences and events throughout the country; shared the stage with CEOs and transhumanists (and even some lawyers); and was scheduled to appear in a couple of documentaries – not to mention an episode of the TV drama ‘Bones’ in Hollywood (which, sadly, proved impractical to organise all the way from Brighton).
The robot even made history as the first real one to appear in a dramatisation of Isaac Asimov’s ‘I, Robot’ stories, on BBC Radio 4 in the Spring. Stanley voiced some of the background robots – typecasting, I know – and the press got in a lather about a machine replacing human actors. Surreal moments like these have become everyday life for me. When the phone rings, it means a new conversation is about to start, and that’s exactly how I like it.
Happy new year to all my readers, clients, visitors, and passers-by. Get in touch: let’s make the future together.
13 December 2017: Why Black Mirror will be real for a billion people
In an October 2017 article, I made the prediction that facial recognition technologies would be defeated by criminals wearing realistic 3D-printed masks of other people’s faces. This unlikely sounding story was dismissed by some as being more like an episode of Black Mirror, Charlie Brooker’s dystopian techno-satire, than something that might happen in the real world. Yet less than a month later, The Register, CNet, and others, reported that hackers claimed to have overcome the face-recognition login of the $1k Apple iPhone X using a $150 3D mask.
The story revealed not only how fast technologies are moving, but also that the futurist’s and the satirist’s jobs are getting harder by the second. In 2017, other real-world tech stories have included: the CIA deploying AI to determine if citizens with tattoos are going to commit crimes; facial recognition technologies predicting if people are gay; and right-wing think tank Reform suggesting that robots could force teachers and doctors to compete with each other by reverse auction to work for less and less money.
Meanwhile, in China – as my facial recognition report revealed – shoppers are paying for goods with their smiles, while others are ‘beautifying’ themselves on video using skin-whitening algorithms. Any of these ideas could have appeared in a Black Mirror episode: a “dystopia that thinks it’s a utopia” as the world of the series has been described.
In 2013, Brooker told the Guardian that his series explores the side effects of the “drug” of modern technology, explaining, “The ‘black mirror’ of the title is the one you’ll find on every wall, on every desk, in the palm of every hand: the cold, shiny screen of a TV, a monitor, a smartphone.” But it is also a mirror held up to our times – with the opening logo cracking apart as reality breaks through – heralding years of bad luck, perhaps.
So, leaving aside the vexed questions of whether a British Prime Minister might ever have sex with a pig (‘The National Anthem’, 2011), or an offensive TV star might run for office (‘The Waldo Moment’, 2013), what other Black Mirror topics and technologies are just a swipe away from reality in today’s Trumpian, post-Cameron world?
The China syndrome
The story that feels most imminent is ‘Nosedive’ (2016), in which a woman’s high social rating plummets, and she finds herself exiled from the elite society that she craves membership of. Once her veneer of politeness is stripped away by her nosedive out of the rankings, she ends up locked in a room, screaming abuse at a man in the cell next door (the princess has become a troll, perhaps).
While the ‘Uberisation’ of the economy has made elements of this story real for some, that’s nothing compared to how one country is embracing the idea of awarding points for good behaviour. In 2020, China will introduce a mandatory social credit system to rate the trustworthiness of its billion-plus citizens. It is already in force on a voluntary basis.
The scheme will be all-embracing, covering citizens’ financial health, bill payments, medical records, purchases, transport use, friend networks, and more. By awarding low scores for bad behaviour, laziness, unhealthy shopping habits, and other infractions – and using AI to infer intent – China hopes that the scheme won’t just monitor people’s behaviour, but also influence it via rewards and preferential treatment.
Those with high scores will benefit from state-sanctioned loans, faster check-in at airports, prominence on dating sites, and more, while penalties for poor social rankings will include slower internet speeds, restricted access to services, travel bans, and even removal of the right to buy certain goods.
The government says the programme will forge “a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity, and the construction of judicial credibility.
“If trust is broken in one place, restrictions are imposed everywhere,” adds the policy, which will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”. A draconian statement that goes far beyond Brooker’s light-hearted episode.
Crime and dissent eliminated at the checkout? The ultimate commentary on a consumer economy, perhaps: the ‘gamification’ of conformity, big data meets Big Brother. Citizens will be warned off befriending people with low social scores, while some analysts have predicted the emergence of a black market for good reputations.
But the scheme raises some pressing questions: what if the data is wrong? What if the system is biased against certain groups – ethnic minorities, LGBTQ citizens, and so on? What if people are penalised for something that isn’t their fault, and the problem snowballs – just like in the Black Mirror episode? I’ve explored the problem of machine-generated verdicts and sentences in this report, while the challenge of bias in AI-based programmes is discussed here.
Over time, some Chinese citizens risk becoming ‘non-people’ – individuals who are ‘blocked’ from their own society, a concept explored in the disturbing episode ‘White Christmas’ (2014), in which Jon Hamm – Mad Men’s Don Draper – plays a character who is deleted from the world, becoming a featureless, voiceless shadow figure in the eyes of the tech-enabled populace.
Make way for the transhuman
Of course, human beings have always been data: everything about who we are is encoded in our DNA. But this trend towards big data merging with our everyday lives suggests that the world of another episode, ‘The Entire History of You’ (2011) – in which the ‘Grain’ brain implant records every moment for instant replay – is also just around the corner.
The idea of embedding electronic devices in the human body is nothing new: transhumanists, a growing movement who believe that human beings and machines will eventually merge, have long championed it. Some of them experiment with tech implants – robotics expert Prof. Kevin Warwick had a chip implanted in his arm as far back as 1998. Transhumanists even have their own social network [http://humanityplus.org].
Meanwhile, millennials record every aspect of their lives, and many of us use wearable devices such as Fitbits, smart watches, VR headsets, and smart earbuds – perhaps the first step on a journey towards incorporating those devices into our bodies. But that journey isn’t inevitable. Neither Google Glass nor Snapchat’s Spectacles, which allow wearers to record whatever they see, have caught on in large numbers – partly because their wearers are seen as invading other people’s privacy, and partly because of the reality of recording reams of useless data.
But as facial recognition systems enter our world en masse over the next few years, it is likely that these barriers will eventually be seen as blips, as we all become used to being watched (and watching) more overtly. At that point, the blocking explored in ‘White Christmas’ (which is already an option on dating apps and social platforms, of course) may become a horrifying reality.
And as virtual, augmented, and mixed-reality environments become more prevalent alongside location-based services, the number of use cases for wearables that remove the need to carry bulky hardware will rise. Projectors already exist that can turn any hard surface into a touchscreen interface, the latest Apple Watch no longer needs to be linked to a smartphone, and so on, so over time we may no longer need our tablets and phones. At that point, the journey towards integrating our devices with ourselves will gather new momentum.
Arguably, the entire history of technology itself can be seen as a journey towards higher resolutions. After all, we’ve gone from VHS to ultra-high-def 4K video within a single human generation, and from crude eight-bit gaming to the type of immersive reality and photo-realism explored in the episode ‘Playtest’ (2016).
And with Moore’s Law clearing the path to ever greater storage, computing power, and processing speeds, the world depicted in the critically lauded episode ‘San Junipero’ (2016) could also become a reality – an episode that asks the question, Do you want to live forever?
What begins as a romantic, Thelma and Louise-style buddy movie is gradually revealed to be a story about two elderly women using immersive reality to relive their youth in an idealised 1980s world: the ultimate retirement home and – the story suggests – eventually a heaven on Earth. When human consciousness can be uploaded to a vast computer system, the two women can find each other again in San Junipero and live there forever.
A similar concept was explored in the episode ‘Be Right Back’ (2013), in which a replica of a woman’s dead lover is created from all the data that remains online, and in the 2017 movie ‘Marjorie Prime’, in which an ageing woman shares her life with a simulation of her long-dead husband.
Mind uploading, aka ‘whole-brain emulation’, is a theoretical concept that some transhumanists and AI experts are working towards as a means to extend life. Indeed, some see it as a logical endpoint of neural network research. In theory, there is no need to fully understand the workings of the human brain if it can simply be scanned in microscopic depth and digitally replicated – perhaps 3D printed – and either stored in virtual reality, or in a robot, computer, or bio-engineered body.
But would it be the same person, or merely a facsimile? Or would it be a new consciousness entirely – one that is free to make new memories? These are intriguing questions, particularly when billionaire technologist Elon Musk believes that we may all be living in a computer simulation; merely characters in an advanced society’s Matrix-style virtual world. That belief is shared by a number of scientists today.
Either way, it’s conceivable that this may be one of the destinations in our shared journey with science and technology: not disproving that the afterlife exists, but creating it for ourselves and our loved ones. And perhaps, an unknown number of years into the future, even building a new universe ourselves, stored in vast quantum processors.
And if Elon Musk is right about the universe we live in today, there may be an infinite regression of simulated universes; a multiverse of data. So does the real one even exist? Look in the Black Mirror and see.
• A new six-episode series of Black Mirror debuts on Netflix on December 29, 2017.
• For more articles on robotics, AI, and automation, go to the Robotics Expert page.
• A version of this article was first published on Hack & Craft News.
21 October 2017: AI & automation, the real political cost
A new report reveals how each constituency in the UK will be hit by AI and automation, and says that the Midlands and the North will be worst affected. The government needs to act now to prevent serious problems later, says Chris Middleton.
Headlines shout about killer robots, conferences hail the rise of AI, robotics reports pile up on boardroom desks, and politicians talk about automation, but as yet no one has formulated an adequate policy response to maximising the opportunities and minimising the risks of these technologies – least of all the government’s own AI report.
So how to force MPs to abandon the rhetoric and engage with AI more urgently? One think tank has the answer: produce a heat-map of UK automation and show politicians how their own constituencies may be affected by it. The new report, The Impact of AI on UK Constituencies – Where Will Automation Hit Hardest? has been produced by Future Advocacy, a think tank that pushes for smart policy-making in the new industrial age.
Future Advocacy sees AI as the key that will unlock the mysteries of data and usher in the next wave of automation. The organisation has applied 2017 PwC data on the impact of automation to the local jobs market in each constituency of the UK: a simple but effective strategy.
The resulting heat-map (see picture) reveals that Shadow Chancellor John McDonnell’s Hayes and Harlington constituency will be the hardest hit by automation and machine intelligence, with nearly 40 per cent of jobs at risk by 2030. Its high concentration of transport and storage jobs makes it most susceptible to disruption. (The long-term impact of driverless vehicles could be massive in the years ahead: in the US, for example, truck driving is the most common job in 29 out of 50 states.)
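Future Advocacy’s method – applying sector-level automation-risk rates to the employment mix of each constituency – can be sketched as a simple weighted join. The sectors, risk rates, and employment counts below are illustrative placeholders, not the PwC or report figures:

```python
import pandas as pd

# Illustrative sector-level automation-risk rates (placeholder values,
# not PwC's actual 2017 estimates)
sector_risk = pd.DataFrame({
    "sector": ["transport_storage", "manufacturing", "education", "health"],
    "risk":   [0.56, 0.46, 0.09, 0.17],
})

# Illustrative constituency employment counts by sector (made-up numbers)
jobs = pd.DataFrame({
    "constituency": ["Hayes and Harlington"] * 2 + ["Islington North"] * 2,
    "sector": ["transport_storage", "education", "education", "health"],
    "employees": [30000, 5000, 12000, 9000],
})

# Weight each constituency's jobs by its sectors' automation risk
merged = jobs.merge(sector_risk, on="sector")
merged["at_risk"] = merged["employees"] * merged["risk"]

# Aggregate to one row per constituency and rank: the heat-map's input
heat = (merged.groupby("constituency")
              .agg(total=("employees", "sum"), at_risk=("at_risk", "sum")))
heat["pct_at_risk"] = 100 * heat["at_risk"] / heat["total"]
print(heat.sort_values("pct_at_risk", ascending=False))
```

Under these made-up inputs, the transport-heavy constituency unsurprisingly tops the ranking – the same mechanism that puts Hayes and Harlington, with its concentration of transport and storage jobs, at the head of the real report’s list.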
Next on the list are two Conservative seats, Crawley (37.8 per cent of jobs at risk) and North Warwickshire (37.1 per cent), followed by two Labour heartlands, Alyn and Deeside (36.8 per cent) and Brentford and Isleworth (36.8 per cent).
The constituencies of leading political figures are certainly in the frame, says the report. Among these are Maidenhead, held by Prime Minister Theresa May (where 28.8 per cent of jobs are at risk); Twickenham, held by LibDem leader Sir Vince Cable (27.2 per cent of jobs at risk); and Islington North, held by Labour leader Jeremy Corbyn (26.2 per cent of jobs).
While Corbyn’s constituency is a long way behind May’s and Cable’s in the number of jobs that could fall to the machines, the figures still represent well over a quarter of opportunities in the area, even though 602 other constituencies face a higher risk of unemployment. Clearly, AI and automation show no respect for political boundaries.
The big picture
Zoom out from local politics to look at the UK as a whole, and a stark picture emerges: the regions that are likely to be hardest hit by automation are the Midlands and the North of England, where jobs in transport, manufacturing, and warehousing are often concentrated.
As a result, ‘one-size-fits-all’ policies won’t work in this fast-emerging world, says Future Advocacy: “Our analysis suggests that the unequal geographical distribution of the impact of automation deserves immediate attention by government, particularly as it is regions that have previously suffered the effects of industrial decline that are likely to be worst hit.
“It is important that the government learns the lessons that the recent history of manufacturing, mining, and similar industries in the UK have taught us. The decline of these industries in parts of the UK towards the end of the last century may have been inevitable, but it is unarguable that the transition to new job types and different industries in these areas could have been managed better.
“The consequences of the historic decline of industries such as manufacturing and coal mining in these regions have been extensively studied, and include high rates of unemployment, high prevalence of illnesses such as depression and drug/alcohol abuse, and depopulation. It is concerning that the areas that have already suffered so much from industrial decline could be hardest hit yet again.
“Even more worryingly, the speed at which job displacement to automation could potentially occur is worth highlighting. For example, while it took several decades for the 19,000 mining jobs in the whole of Warwickshire to be lost, our analysis suggests that around 20,500 jobs (or 37.1% of the total number of jobs in 2015) in North Warwickshire could be displaced by the early 2030s. The impact on individuals, families, and whole communities will be profound.”
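As a sanity check, the quoted North Warwickshire figures imply the size of the underlying jobs base: if 20,500 displaced jobs correspond to 37.1 per cent of all 2015 jobs, the constituency had roughly 55,000 jobs in total.

```python
# Figures quoted in the Future Advocacy report
displaced = 20_500   # jobs that could be displaced by the early 2030s
share = 0.371        # stated as 37.1% of the total number of jobs in 2015

# Implied total jobs base in North Warwickshire in 2015
total_jobs_2015 = displaced / share
print(round(total_jobs_2015))  # ≈ 55,256
```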
Lower-risk constituencies tend to be those that offer more opportunities in sectors such as education and health, says Future Advocacy – confirming findings published in the RSA’s recent Age of Automation report, which noted that jobs in hospitality, leisure, medicine, healthcare, and education are most resistant to automation, because they rely most on human relationships and empathy.
The other side of the coin
Yet automation isn’t a zero-sum game, as the report acknowledges. Automation may sweep aside many routine, low-skilled tasks, but it will also create new types of job. In the long term, the economic growth spurred by these new technologies may mean that the employment impact is neutral, but long before then skills, education, and training will be the real battlegrounds as people fight to retain their place in society.
The report says: “For those with just GCSE-level education or lower, the estimated potential risk of automation is as high as 46 per cent in the UK, but this falls to only around 12 per cent for those with undergraduate degrees or higher. Similarly, men may be at higher risk of job displacement by automation than women.
“The sectors with the highest estimated risk of automation are characterised by relatively high proportions of male employees and of workers with low educational attainment.”
The risk of rising economic disparity, with wealth concentrated in fewer and fewer hands, is real, says the report. Earlier this year, right wing think tank Reform suggested that a quarter of a million jobs could be swept out of the UK public sector alone, leaving teachers, doctors, nurses and care workers to compete via reverse auction for ad hoc work – a scenario that Reform suggested would be a good thing.
To avoid an AI-enabled field day for the ideologues, Future Advocacy says that the government should conduct research into alternative income and taxation models that favour a fairer distribution of wealth.
“This could include undertaking well-designed trials of a Universal Basic Income (UBI) along the lines of those currently underway in Finland, Spain, the Netherlands, and Canada. The government’s fiscal and welfare policies must be updated to ensure that wealth is not increasingly concentrated in the hands of a few commercial entities who own robots and other automated technologies.
“Ultimately, we support a taxation model that results in a fairer distribution of the wealth that these technologies will create.”
Checks and balances
The need for an economic counterbalance to the rise of the machines seems overwhelming, as society prepares itself for a more skilled, creative, flexible, and/or portfolio-based future. But whether a Conservative government would ever consider a UBI strategy must be in doubt: it’s hard to imagine right-wing newspapers ever getting behind the idea, despite its obvious good sense.
So how else should the British government engage with automation today, so that it can plan for a better, fairer, and more equal future? Future Advocacy says that it should:
• Commission and support research to assess which employees are most at risk of job displacement, including how impacts will differ by employment sector, region, age, gender, educational attainment, and socio-economic group.
• Draft a White Paper on adapting the education system to lifelong learning and maximising the opportunities of AI – a recommendation also made by techUK in its 2017 manifesto for digital renewal, by the RSA in its automation report, and by Jeremy Corbyn in his speech to the 2017 Labour conference.
Such a White Paper shouldn’t restrict itself to extolling the importance of STEM and coding skills, adds Future Advocacy, but also make detailed proposals to future-proof training in creative and interpersonal skills. More, it should support initiatives that encourage underrepresented groups to pursue AI and robotics training, including women and ethnic minorities.
• Make the AI opportunity a central pillar of the UK’s Industrial Strategy and of the trade deals that the UK negotiates post-Brexit – a recommendation also made by techUK.
• Ensure that the migration policy in place post-Brexit will still allow UK-based companies and universities to attract the best AI and robotics talent from all over the world.
• Develop smart, targeted strategies to address future job displacement.
The report adds: “The importance of targeting these interventions at those most at risk cannot be overemphasised.”
Together, this report, the RSA’s Age of Automation document, and techUK’s election-themed manifesto for digital renewal are better assessments of the new world of work than the government’s much-feted AI report, published last Sunday while ministers were kneeling to pray that Brexit works.
That the government needs to look at these figures is beyond any doubt. No 10 needs to stop obsessing about what sort of society it doesn’t want (one overseen by Europe) and start thinking about what kind of economy it wants to create in the long term: a future in which it will need to be ambitious, entrepreneurial, and prepared to take risks.
If everyone in society is to benefit from the new world of work – which was surely the subtext of Brexit – then Whitehall needs to give this report urgent consideration.
2 October 2017: Does Jeremy Corbyn really want a robot tax?
Misreported by the mainstream media yet again, the Labour leader backs research saying that the UK must invest more in training its workers for an automated future, says Chris Middleton.
On 27 September, Labour leader Jeremy Corbyn used his speech to the party’s 2017 conference in Brighton to call for a new industrial and education strategy to face “the challenges of the future [that] go beyond the need to turn our backs on an economic model that has failed to invest in and upgrade our economy”.
Further education, life-long learning, and a rebranded education system are part of Corbyn’s new deal. He said: “Labour will build an education and training system from the cradle to the grave that empowers people. Not one that shackles them with debt. That’s why we will establish a National Education Service, which will include at its core free tuition for all college courses, technical, and vocational training so that no one is held back by costs and everyone has the chance to learn.”
Corbyn suggested that state-owned investment banks would be the best way to push funding to every corner of the UK. With several retail and investment banks threatening to leave the UK in the run-up to Brexit, the idea may be timelier than his critics realise.
At least one international study supports the need for the UK to invest more from the centre. The Global Innovation Index (GII), published annually by WIPO et al, backs Corbyn’s view that the UK is investing too little in both its education system and its central infrastructure. The UK is a long way down the rankings in several key areas, says the report: 102nd out of 127 countries in capital formation around critical infrastructure programmes; 22nd for education quality; 25th for education expenditure as a percentage of GDP, and just 46th in tertiary enrolment.
Rise of the robots
Corbyn then turned his attention to automation, robotics, and AI, which the government has identified as being among the ‘eight great technologies’ that are critical to economic prosperity. He said: “The tide of automation and technological change means re-training and management of the workforce must be centre-stage in the coming years.
“We need urgently to face the challenge of automation: robotics that could make so much of contemporary work redundant. That is a threat in the hands of the greedy, but it’s a huge opportunity if it’s managed in the interests of society as a whole.
“We won’t reap the full rewards of these great technological advances if they’re monopolised to pile up profits for a few. But if they’re publicly managed – to share the benefits – they can be the gateway for a new settlement between work and leisure. A springboard for expanded creativity and culture.”
The phrase ‘publicly managed’ appears to refer to his proposed reset of the UK’s banking and investment systems: ethical investment, in other words, and a new form of mutually beneficial capitalism.
Far from being an old red rag for conservative bulls to charge at, the policy echoes the findings of several progressive reports, such as the RSA’s recent The Age of Automation, which says that the burden of tax must shift from labour to capital in order to counter any socially destructive impacts of mass automation. Other robotics and AI studies have made similar recommendations.
Indeed, this is the flip side of the coin tossed by Reform earlier this year. The right-wing think tank – favoured by Theresa May and health secretary Jeremy Hunt – suggested that mass automation would allow the public sector alone to shed 250,000 workers and force doctors, teachers, and nurses to compete for gig economy work via reverse auction (bid to work for less money).
So it is hard to argue with Corbyn’s claim that automation may be used to maximise benefits for the few when that scenario is spelled out in conservative analysts’ own reports, which even suggest that it would be a good thing.
And yet the papers do argue with the Labour leader, as is traditional in the UK’s politicised media landscape. “Return of the Luddites!”, shouted the Telegraph, ignoring Corbyn’s clearly stated support for applying automation in a way that benefits all of society. The implication of the Telegraph’s headline is perhaps that modernity demands that only the few should benefit.
En masse, the UK’s conservative press pushed the message that Corbyn is calling for a ‘robot tax’ to penalise companies that automate their workforces; indeed, it was suggested that Labour’s pre-speech briefings spun this line to the media. That may be true, but what Corbyn actually said on tax was very different:
“When I’ve met business groups, I’ve been frank that we will invest in the education and skills of the workforce and we will invest in better infrastructure from energy to digital, but we are going to ask big business to pay a bit more tax.”
So there are three explanations for the media claim that Corbyn is calling for a robot tax. One, he is, but neglected to say so in a detailed, 6,000-word speech; two, Labour’s own spin doctors don’t understand the distinction between a robot tax and raising corporation tax to pay for life-long learning; and three, the conservative press have conflated two unrelated concepts in an attempt to alarm business leaders. Any of these may be true.
But whichever is the case, the difference between raising corporation tax, and taxing a robot as if it were an employee – an idea supported by Bill Gates, the European Union, and others – should be obvious. One shifts the onus onto the robot’s manufacturer, and the other onto their business customers. Conflating the two to suggest that Corbyn sees robots as the enemy is both unsupported by the text of his speech, and unhelpful to this debate.
Business leaders respond
In the event, the UK’s business leaders responded better to Corbyn’s speech than the Tory press did, which suggests that Corbyn is ringing bells in some City towers, at least. The CBI said that it, “shares much of Labour’s vision for a fairer society underpinned by good business. But without an open dialogue there is a risk that some of their policies could knock us into reverse gear.
“The focus on research & development, infrastructure and education is encouraging, but artificially hiking wages and changing corporation tax could be investment dampeners, not drivers. Labour is certainly laying out a new way forward and we urge them to iron out inconsistent messages – especially the relationship between state and industry – and clarify policies that are sometimes hard to see delivered and paid for.
“Labour can be reassured that they do have a significant joint agenda with businesses of all sizes, which stand ready to ensure that the opposition’s ideas deliver sustainable prosperity across the UK, and avoid the threat of an era of regressive industrial strategy.”
So what did the tech industry think of Corbyn’s speech?
Earlier this year, techUK, the organisation for the UK’s IT innovators, called for a new industrial strategy in its own manifesto for change, along with a programme of life-long learning. On the face of it, therefore, Corbyn is simply backing their ideas. But in its own response to the Labour leader’s speech, techUK was oddly ambivalent.
Deputy CEO Antony Walker said: “Jeremy Corbyn is right that there are huge benefits to society that can come through automation. From AI that can help improve diagnosis rates in the NHS to machine learning that can reduce time wasted on form filling in businesses. But if the UK wants to lead the way in harnessing this power we must be careful not to undermine the investment in digital technologies that will drive productivity and economic growth. [In fact, Corbyn simply said he wants broader, more ethical investment.]
“All political parties should be thinking about how we handle the challenges to come from accelerated automation. But it is too soon to be making assumptions about its impact on either jobs or the tax base. Care needs to be taken not to put a tax on productivity growth that is so fundamental to raising living standards.
“The challenges posed by automation cannot be solved by a short-term fix. Automation can and will lead to the creation of new jobs and industries. What is important is that UK workers have the skills and education needed to take advantage of those opportunities. That is why Labour is right to highlight the importance of improving investment in education and lifelong learning. This approach must take priority over relying on taxation to slow the pace of change.”
In other words: “Nice sentiment, Jeremy, but don’t ask us to pay more tax,” says UK IT.
But are Corbyn’s critics right to say that raising corporation tax deters growth and innovation? Not according to the source of the definitive ‘tax cuts equal growth’ policy. “In reality, there’s no evidence that a tax cut now would spur growth,” says Bruce Bartlett, Republican domestic policy adviser to President Ronald Reagan in the 1980s.
In an opinion piece for the Washington Post, Bartlett now describes his own policy as “wishful thinking” and a “Republican tax myth”, and explains that much stronger growth followed President Clinton’s tax increases in the 1990s, and President Obama’s this century, than occurred during lower tax regimes.
But back in the UK, one thing is certain: Whitehall must improve education and infrastructure spending in global terms: no statistics say otherwise. The UK also needs to triple its R&D investments to remain competitive, post Brexit.
Meanwhile, if business is to be the sole beneficiary of both mass automation and a skilled workforce – while the gig economy leaves everyone else scrabbling for micro-payments – then raising corporation tax to help better educate the populace and give them a lifetime of opportunities is a sensible and just idea.
The alternative is raising taxes for everyone, including those who will be hardest hit by the ongoing application of 19th Century industrial thinking to 21st Century technology. Far from presenting an outdated vision, Corbyn is saying that the old strategy is no longer fit for purpose.
As I’ve said before, the core challenge in robotics and automation is actually very simple: what developers believe they are creating (assistive technologies for a better society, aka man plus machine), and what most customers think they are buying (a means to slash costs, remove workers, and force up productivity, aka man vs machine), are two completely different things. More, Whitehall and the Bank of England appear to see the benefits as coming from the latter camp, as my report for diginomica from UK Robotics Week 2017 revealed.
And despite the tendency of the digital world to enable flat, peer-to-peer, collaborative processes, the UK’s policymakers are trying to shoehorn the technologies into a top-down industrial strategy that was forged in an era of cotton mills and workhouses. Far from following that trend, Jeremy Corbyn is trying to break it. That’s a forward-looking policy, not a Luddite view.
But in the long run, he may have nothing to worry about when it comes to British workers being swept aside by an unholy alliance of machines and men in stovepipe hats: the UK is investing a pitiful amount of money in kickstarting the sector domestically, despite having identified it as being critical to future prosperity.
With a total central investment of just £300 million between 2016 and 2020, the UK is nowhere in global terms; Japan, for example, is investing $161 billion to create a “super-smart society”, while China is automating faster than any other country. Britain’s uptake of the technologies lags a long way behind most Western economies, such as Germany, France, and Sweden.
And that’s not all: according to the Science and Technology Select Committee (quoted in the RSA’s automation report), 80 per cent of the UK’s investment in robotics comes directly from the EU. With no strategy in place to replace that funding, the UK’s chance to be a world leader may already have gone.
• This article was first published on diginomica.
25 September 2017: What’s Uber really driving at?
Chris Middleton zeroes in on the real story behind the headlines and popular protests about Uber’s lost licence to trade in London.
On 22 September, Transport for London (TfL) refused to renew Uber’s private hire licence on the streets of the capital, saying that it had taken the decision on the grounds of “public safety and security implications”.
What “security implications” referred to is unclear, but the move was merely the latest in a series of legal tussles for Uber, which has battled authorities in cities throughout the world. Many of these have centred on the clash between what is billed as a ride-sharing app and what is used by millions as a taxi service with non-vetted (if user-reviewed) drivers.
Arguably, among Uber’s main achievements is the sidestepping of regulations that are designed to protect the public – doing so with its customers’ support while avoiding EU sales tax. That’s not a situation that authorities were going to rubber-stamp forever.
In Paris in 2014 and 2015, taxi company protests and attacks on Uber drivers led to the UberPop app being suspended there, but in many other cities, such as New York, the service remains popular.
TfL also cited Uber’s use of its Greyball technology, which has helped the company to evade investigators, according to a New York Times report earlier this year.
Yes we Khan?
Mayor of London Sadiq Khan said in a statement: “I fully support TfL’s decision. It would be wrong if TfL continued to license Uber if there is any way that this could pose a threat to Londoners’ safety and security.” General secretary of the Licensed Taxi Drivers’ Association, Steve McNamara added: “This immoral company has no place on London’s streets.”
Some have criticised Khan for a retrograde step, but he had little choice but to back his transport chiefs. Khan’s role shouldn’t include rubber-stamping the wishes of a US corporation, however popular its services may be with citizens. However, some have argued that TfL’s decision was itself political: the result of a concerted campaign by the Licensed Taxi Drivers’ Association, supported by – among others – Nigel Farage.
In London, Uber claims up to 3.5 million registered users, and says its apps provide on-demand, flexible employment for 40,000 drivers. However, Uber’s figures may need greater scrutiny: 3.5 million is around 40 per cent of the resident population of the capital.
The effective ban comes at a challenging time for both the capital and Brexit UK, as the country struggles to attract inward investment, maintain consumer confidence, and ready itself for the days when increasing numbers of citizens will rely on the gig economy to supplement their income as automation increases.
The company said it will appeal against TfL’s “Luddite decision”, and added, “Far from being open, London is closed to innovative companies”.
However, while popular in 600 cities worldwide (according to Uber’s own estimates), the company’s main innovation is mobilising other people’s assets to build a market presence that will ultimately dispense with their services.
Uber is among the many companies, such as Waymo (currently suing Uber for patent infringement and misappropriation of Google assets), Tesla, Ford, Mercedes, and GM, to be developing autonomous vehicle technologies. Uber’s long-term aim is almost certainly to build the infrastructure to summon driverless cars with a click. Much of that infrastructure exists already.
Any public transport service that’s based on escalating private car ownership in cities can’t be held up as a vision of the future. So accusing its opponents of being Luddites is wide of the mark – and Uber knows it. Its drivers are a medium-term means to an automated end.
Despite this, Uber’s general manager in London, Tom Elvidge, told the BBC that he was putting the company’s 40,000 drivers front and centre of the protest: “To defend the livelihoods of all those drivers, and the consumer choice of millions of Londoners who use our app, we intend to immediately challenge this in the courts.”
The gig conundrum
Many of those drivers rely on Uber for ad hoc, gig-economy income, but not exclusively; it’s not the case that all are now out of a job. Indeed, Uber’s insistence that they are is interesting: up until now, it has always claimed its drivers are self-employed.
A more persuasive argument in Uber’s favour is that it sweeps aside a bias that some have observed, anecdotally, in London’s taxi trade: a bias against ethnic minorities, both as drivers and passengers. And Uber will take you south of the river.
That said, some Uber workers have taken to social media themselves to criticise the company for, among other things, flooding the market with drivers to force costs down, and taking commissions of up to 30 per cent. But at present, the howls of protest at TfL’s move are louder than those of Uber’s critics, which has weakened Khan’s standing in the capital.
Uber is seen as synonymous with the gig economy, so it’s worth considering its effect. Uber is forcing down the cost of services to end-users – not just within its own business, but also in its sector. Customers love it, just as they love free music and movies, but (understandably) they ignore the long-term repercussions:
While, over time, the opportunities to earn money from the gig economy increase – a good thing – the ability to earn a living from them dissipates. In this sense, the gig economy is emerging as a broad-spectrum scrabble for micro-payments, while automated businesses, such as Uber, rake in escalating profits. Welcome to the new 19th Century.
The music industry is the classic example. The network connects a musician with listeners worldwide – another good thing – while stuffing the channel with noisy middlemen and advertisers, leaving our globally-connected artist with an income of cents, if she’s lucky. This is because, like on-demand transport and information, music has become commoditised via the network effect. Any musician can earn more from busking for one hour in a small town centre than she can from 100,000 streams in the global village, Spotify.
Some policymakers are well aware of this phenomenon. For example, right-wing thinktank Reform suggested earlier this year that mass automation in the UK public sector should be backed by gig-economy workers competing by reverse auction for ad hoc bookings (bidding to work for less and less money). In this case, the workers in Reform's sights were teachers, doctors, and nurses. In this sense, the gig economy could be seen as a wholesale transfer of income and services to the many, but profits and power to the automated few. That sits uncomfortably in a peer-to-peer world.
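To make the mechanics of such a scheme concrete, here is a minimal sketch of a reverse auction in Python. All names and figures are invented for illustration; the point is simply that, unlike a conventional auction, the lowest bid wins, which drags the clearing price downwards with each round.

```python
# Sketch of a reverse auction: workers bid to perform a job, and the
# LOWEST bid wins -- the opposite of a conventional auction.
# All bidder names and amounts below are hypothetical.

def reverse_auction(bids):
    """Return the (bidder, amount) pair with the lowest bid."""
    return min(bids.items(), key=lambda item: item[1])

# Hypothetical hourly bids for one ad hoc booking:
bids = {"worker_a": 25.00, "worker_b": 22.50, "worker_c": 19.00}

winner, price = reverse_auction(bids)
print(winner, price)  # worker_c wins at 19.0 -- the cheapest offer
```

Run repeatedly across a large pool of workers, the only way to keep winning work is to keep bidding lower.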
To the future, then
Which brings us back to Uber. Whether TfL’s move is enforceable in the virtual world remains to be seen: some previous examples of bans or licence rejections have seen drivers continue to pick up work. Uber itself has long favoured a Wild West approach to business whenever it rides into town.
The challenge for legislators is that everyone loves a cowboy. TfL can refuse a licence, but it can’t mass-delete a popular app – unless app stores cease to approve it. Many Uber users now see the service as a consumer right, and as the most convenient, safest, low-cost transport solution in London – even if TfL believes their safety to be low on Uber’s list of priorities.
But if Uber is sincere in its determination to build a collaborative, gig-economy, on-demand future – rather than seizing market share at any cost – then it must learn how to collaborate itself.
Uber CEO Dara Khosrowshahi acknowledged this when he admitted, “The truth is that there is a high cost to a bad reputation”. He said that he would work towards collaborating with authorities to make the company better: a welcome change of tone.
Put another way, a great app, a disruptive technology, and a popular service don’t excuse being a bad corporate citizen. Whether serial law-breaker or popular disruptor, it’s time for Uber to grow up. To criticise it isn’t to be a Luddite. The truth is, perhaps, much simpler: it had a licence and blew it.
But at the heart of this story is a good idea and a popular service. So let’s hope that a better version comes along soon, either via an improved, more contrite Uber, a competitor, or Wired’s suggestion of a new employee-owned service.
• A Sky Data poll shows that more than half of people in the UK support Uber operating in the country. Meanwhile, the ride-share company has launched an online petition at Change.org, urging users to help save the service in London. At the time of writing, it has over 700,000 supporters. However, a random sample of responses to the petition reveals that a surprising number are signing from outside the UK – according to their stated locations – while the media is portraying it as a London protest. In an era of climate change, food banks, Trump, and nuclear Armageddon, it’s comforting to know that people around the world finally have something to get angry about.
• This article was first published on Computing.
• For a compelling alternative view, try this blog from Chris Yiu.
September 2017: UK digital progress excellent, but SME support dying, warns techUK
Chris Middleton contrasts the positive vision of a new public sector report with some critical failings identified by the Civil Service itself.
TechUK, the organisation for Britain’s technology innovators, has rejoined the political fray with a new paper on how the government can deliver its vision of digital transformation. The report, Smarter Services, has been produced by the organisation’s Public Services Board.
Earlier this year, techUK published an excellent manifesto for post-Brexit digital renewal. What’s different this time around is that the group has drawn its findings from the machineries of government itself: a 2017 survey of 948 civil servants, including 200 at C-level or above.
So what does the public sector think of its own tech record? Although 97 per cent of workers see technology as an enabler or necessity, 57 per cent of respondents believe that a shortage of internal skills is an obstacle to those benefits being realised – a significant increase from last year. Critical skills gaps exist in digital service design, procurement, change management, and data management/analytics. The last two are key drivers towards organisational efficiency, says the report.
It adds: “Senior civil servants and those working in digital roles had more confidence that their department had the requisite skills and capabilities to deliver its business plan than their juniors did.
“When asked to rate their department’s expertise in four key areas (digital service design, data, procurement, and change management), on average 20 per cent more civil servants in digital roles agreed that their department had the skills necessary to deliver its business plan. This could signify that while government has had some success in attracting expertise to its Digital, Data and Technology profession, these skills have yet to permeate the wider civil service.”
The logjam extends beyond the perimeter of government, says techUK, with 79 per cent of civil servants believing that current systems and working practices prevent citizens from interacting with the government more online – something the report says there is a strong appetite for. So what's the solution?
For many in public service, the answer is sharing more information. Removing the barriers to collaboration is an important route to improving citizen services online, according to 93 per cent of respondents. Thirty-six per cent of civil servants think legislation prevents them from sharing more, while a further 36 per cent believe that incompatibilities in internal working practices are the root cause.
Nearly one-third of respondents think moving more citizen transactions online is either too complex a challenge (19 per cent) or too expensive (13 per cent). Among senior civil servants, this rises to 43 per cent, perhaps suggesting that the business case for digital services needs to be made higher up the departmental food chain.
But when it comes to the vision for technology, techUK believes that the government is making the right noises – if one sets aside Whitehall's criticism of end-to-end encryption, a bedrock of digital trust. “To change and to do so at pace” was how the then Minister for the Cabinet Office, Ben Gummer, set out the vision for the public sector in early 2017, when he announced the government’s transformation strategy.
The report says: “This is a laudable vision, and one that the government has already made great strides towards. The UK is ranked as the best digital government in the world by the UN, and the £450m increase in the Government Digital Service’s budget made in the 2015 spending review signals the government’s intention to build upon this solid foundation.”
So the UK is making excellent progress, says techUK, even if the centuries-old Whitehall machinery sometimes slips its gears in its effort to move at the speed of technology disruption. Indeed, a core function of the Civil Service is to prevent disruption as different administrations come and go: a factor that should never be ignored by tech strategists.
The UN’s positive assessment of the UK’s digital programme was supported earlier this year by the 2017 Global Innovation Index (GII), an annual report published by the World Intellectual Property Organisation (WIPO) and two business schools. It rated the UK the best in the world for its overall use of ICT, as well as top in both e-participation and digital government. However, that same report slammed the UK for, among other things, its poor investment in education and in upgrading the national infrastructure. These failings are two further impediments to Whitehall’s digital ambitions.
The techUK report admits that the “challenge remains great”: “The National Audit Office has warned that government has so far struggled to make a success of end-to-end transformation of the sort envisaged by the Transformation Strategy. The disruption of core public services caused by the recent WannaCry cyberattack highlighted that the public sector remains a disparate and often difficult environment for transformation to flourish, with governance, risk, and skills shortages significant barriers to be overcome.”
Of course, the ‘successful’ WannaCry attack was also indicative of a government that failed to listen to repeated warnings, and whose sweeping programme of cuts forced departments to bypass essential OS upgrades.
Nevertheless, the techUK report is upbeat and optimistic, especially when it comes to the UK’s capacity to innovate: “Fortunately, the UK also benefits from one of the most vibrant and thriving digital economies on the planet. UK-based businesses of all shapes and sizes are pushing boundaries, not only in terms of digital innovation, but also of large-scale business transformation and change management.
“This knowledge and experience should prove a valuable resource for the public sector, and industry stands ready to be constructive partners in the transformation journey. techUK has been working hard to bring public and private sectors together to address these issues.”
SMEs vs. ‘the oligopoly’
So what of the future? The organisation makes a number of recommendations for how the transformation strategy can be made to work. It says the government should:
• Increase its willingness to experiment with new working practices
• Develop channels to fund and account for cross-government work
• Create common standards and working practices across departments
• Offer three-year placements in industry for civil servants in technical roles
• Provide all Fast Stream workers with digital skills training, and:
• Use public sector procurement to help foster innovation in the supplier community.
The latter we can file under ‘ambitious’, as the government’s procurement practices are surely part of the problem as much as they may be the long-term solution. Despite the efforts of former Cabinet Office Minister, Francis Maude, to wean Whitehall off its ‘oligopoly’ habit – to use Maude’s own description of the problem – the government has consistently swung back to the enterprise giants, whenever it has nurtured the green shoots of cloud-native or SME alternatives.
A good comparison is the scene in Ridley Scott’s The Martian where a critical systems failure causes Matt Damon’s thriving crops to wither in the perishing alien atmosphere.
The report concurs with this assessment: “Despite the government setting a target to spend £1 in £3 of its procurement budget with smaller and medium-sized businesses by 2020, only 21 per cent of civil servants believe that there is an appetite within their department or organisation to increase the involvement of SMEs in the procurement chain. There has been a particularly large drop (13 per cent) in the proportion of respondents working in tech-facing roles who agreed with the statement.
“While only one in ten of those involved in procurement decisions agreed that their department or organisation had access to a wide range of suppliers, less than a quarter picked widening the supplier base as a priority. 24 per cent do not believe they need access to a diverse range of suppliers, down 12 per cent since 2016.”
So the tide seems to be turning against SMEs within the public sector: a cause for concern post-Brexit, when the UK will have no choice but to nurture its home-grown talent. Reversing that trend will demand real leadership, but the government may feel it has more important things to do.
This government and the preceding two administrations have been consistently criticised for too tight a focus on cost over value, since last decade’s catastrophic recession. The report notes that, within the Civil Service at least, that culture is changing:
“The drive to deliver better services is increasingly seen as the primary driver of transformation within government rather than cost savings. More than twice as many civil servants see IT as critical to improving service delivery (78 per cent) than view it as critical to making cost savings (34 per cent).”
Both the challenge and the opportunity, therefore, lie in taking that message to the policymakers.
August 2017: Is AI automating prejudice?
Chris Middleton reports on how the flaws in human society are already being replicated – often accidentally – by artificial intelligence.
• This article has been quoted in London’s Evening Standard newspaper.
AI is the new must-have differentiator for technology vendors and their customers. Yet the need to understand AI’s social impact is overwhelming, not least because most AI systems rely on human beings to train them. As a result, existing flaws and biases within our society risk being replicated – not in the code itself, necessarily, but in the training data that is supplied to some systems, and in the problems that they’re being asked to solve.
Without complete data, AI systems can never be truly impartial: they can only reflect or reproduce the conditions in which they are created, and the belief systems of their creators. This report will explain how and why, and share some real-world examples. The need to examine these issues is becoming increasingly urgent. As AI, machine learning, deep learning, and computer vision rise, buyers and sellers are rushing to include AI in everything, from enterprise CRM to national surveillance programmes and policing systems.
Are people with tattoos criminals?
One example of AI in national surveillance is the FBI’s bizarre scheme to record and analyse citizens’ tattoos, in order to predict if people with ink on their skin will commit crimes. Take a ‘Big Bang’ view of this project (rewind the clock to infer what the moment of creation must have been), and it’s clear that a subjective, non-scientific viewpoint (‘people with tattoos are criminals’) was adopted as the core principle of a national security system, and software was designed to prove it.
The code itself is probably clean, but the problem that the system is being asked to solve, and the data it is tasked with analysing, are surely flawed. Arguably, they betray the prejudices of the system’s commissioners. Why else would it have been conceived?
In such a febrile atmosphere, the twin problems of confirmation bias in research, and human prejudice in society, may become automated pandemics: AIs that can only tell people what they want to hear, because of how the system has been trained. Automated politics, with a veneer of evidenced fact.
Often this part of the design process will be invisible to the user, who will regard whatever results the system produces as being impartial. A recent AI white paper published by UK-RAS, the UK’s research organisation for robotics and AI, makes exactly this point: “Researchers saw how machine learning technology reproduces human bias, for better or for worse. [AI systems] reflect the links that humans have made themselves.”
That’s the view of the UK’s leading AI and robotics researchers. So, is AI automating prejudice and other societal problems? Or are these issues simply hypothetical?
The racist facial recognition system
The unfortunate fact is that they are already becoming real-world problems, in a significant minority of cases. Take the facial recognition system developed at MIT recently that was unable to identify African American women, because it was created and tested within a closed group of white males. The libraries for the system were distributed worldwide before an African American student at MIT exposed the fact that it could only identify white faces.
We know this story is true, because it was shared by Joichi Ito, head of MIT’s Media Lab, at the World Economic Forum 2017. He described his own students as “oddballs” – introverted white males working in small teams with few external reference points, he said.
The programmers weren’t consciously prejudiced, Ito explained, but it simply hadn’t occurred to them that their group lacked the diversity of the real world into which their system would be released.
As a result, a globally distributed AI was poorly trained and ended up discriminating against an entire ethnic group, which was invisible to the system. That the developers hadn’t anticipated this problem was their key mistake, but it was a massive one.
Male dominance and insularity are big problems for the tech industry: in the UK, just 17 per cent of people in science, technology, engineering, or maths (STEM) careers are women, while in the West the overwhelming majority of coders are young, white males.
The UK-RAS report shares a similar example of societal bias entering AI systems: “When an AI program became a juror in a beauty contest in September 2016, it eliminated most black candidates, as the data on which it had been trained to identify ‘beauty’ did not contain enough black-skinned people.” Again, the humans training the AI unconsciously weighted the data.
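The failure mode in both stories can be sketched in a few lines of code. This is a toy illustration, not any real system: a one-nearest-neighbour “recogniser” trained only on examples drawn from a narrow band of its feature space, with all feature values invented for the sketch.

```python
# Toy illustration (not any real system): a 1-nearest-neighbour
# "recogniser" trained on examples from only one group.
# Feature values below are invented for the sketch.

def nearest_label(sample, training_set):
    """Return the label of the training example closest to the sample."""
    return min(training_set, key=lambda ex: abs(ex[0] - sample))[1]

# Training data covers only a narrow band of the feature space
# (e.g. one skin-tone range in a face dataset):
training = [(0.80, "face"), (0.85, "face"), (0.90, "face"),
            (0.20, "not_face"), (0.15, "not_face")]

# A genuine face whose feature value falls outside the trained band
# sits closer to the "not_face" examples -- so the system fails to see it:
print(nearest_label(0.30, training))  # not_face
print(nearest_label(0.85, training))  # face
```

Nothing in the algorithm is “prejudiced”; the discrimination is entirely a product of what the training set omits.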
The lesson here is not that any given AI or line of code is inherently biased – although it might be – it’s that the data that populates AI systems may reflect local/social prejudices. At the same time, AI is seen as impartial, so any human bias risks becoming accepted as evidenced fact. Most AI is a so-called ‘black box’ solution (see below), making it hard for users to interrogate the system to see how or why a result was arrived at. In short, many AI systems are inscrutable.
The legal dimension
Why are these risks so important to consider? Evidence is mounting that such data problems may have begun to automate bias within our legal systems: a real challenge as law enforcement becomes increasingly augmented by machine intelligence in different parts of the world.
COMPAS is an algorithm that’s already being used in the US to assess whether defendants or convicts are likely to commit future crimes. The risk scores it generates are used in sentencing, bail, and parole decisions – just as credit scores are in the world of financial services. A recent article published on FiveThirtyEight.com set out the alleged problem with COMPAS:
“An analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend – and potentially treated more harshly by the criminal justice system as a result. On the other hand, white defendants who committed a new crime in the two years after their COMPAS assessment were twice as likely as black defendants to have been mislabeled as low-risk.
“An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system.” Again, this is a problem of flawed data being fed into an application that is seen by its users as impartial.
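The kind of analysis ProPublica ran can be sketched with hypothetical numbers (these are not ProPublica's figures). A “false positive” here is a defendant labelled high-risk who did not go on to reoffend; comparing that error rate across groups is what exposed the disparity.

```python
# Hypothetical numbers, NOT ProPublica's data: a sketch of per-group
# false positive rates. A false positive is a defendant flagged
# high-risk who did NOT reoffend.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) booleans."""
    non_reoffenders = [r for r in records if not r[1]]
    flagged = [r for r in non_reoffenders if r[0]]
    return len(flagged) / len(non_reoffenders)

# Invented data: ten non-reoffenders per group, flagged high-risk at
# different rates -- the kind of disparity the analysis described above found.
group_a = [(True, False)] * 4 + [(False, False)] * 6   # 40% wrongly flagged
group_b = [(True, False)] * 2 + [(False, False)] * 8   # 20% wrongly flagged

print(false_positive_rate(group_a))  # 0.4
print(false_positive_rate(group_b))  # 0.2
```

A system can look “accurate” overall while distributing its mistakes very unevenly between groups – which is why overall accuracy alone tells you little about fairness.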
Tainted data in a networked system
The problem of tainted data runs deep in a networked society. Some months ago, a journalist colleague shared a story with Facebook friends of how he searched for images of teenagers to accompany an article on youth IT skills.
When he searched for “white teenagers”, he said, most of the results were library shots of happy, photogenic young people, but when he searched for “black teenagers”, he was shocked to see Google return a disproportionately high number of criminal/suspect mug shots.
(Author’s note: I verified these results at the time. The problem is still noticeable today, but far less overt, suggesting that Google has tweaked its algorithm.)
The underlying point is that, for decades, overall media coverage in the US, the UK, and elsewhere, has disproportionately focused on criminality within certain ethnic groups. This partial coverage populates the network, which in turn reinforces public perceptions: a vicious circle of confirmation bias feeding confirmation bias. This is why diversity programmes and positive messaging matter: it’s not about ‘political correctness’, as some allege, but about rebalancing a system before we replicate it in software.
This extraordinary article on Google search data reveals how prejudices run deep in human society. (Sample quote: “Overall, Americans searched for the phrase ‘kill Muslims’ with about the same frequency that they searched for ‘martini recipe’ and ‘migraine symptoms’.”)
Human bias can affect the data within AI systems at both linguistic and cultural levels, because – as we’ve seen – most AI still relies on being trained by human beings. To a computer looking at the world through camera eyes, a human is simply a collection of pixels. AI has no fundamental concept of what a person is, or what human society might be.
A computer has to be taught to recognise that a certain arrangement of pixels is a face, and that a different arrangement is the same thing. And it has to be taught by human beings what ‘beauty’ and ‘criminality’ are by feeding it the relevant data. The case studies above demonstrate that both these concepts are subjective and prone to human error, while legal systems throughout the world have radically different views on crime (as we will see below).
Our systems replicate our beliefs and personal values – including misconceptions or omissions – while coders themselves often prefer the binary world of computers to the messy, emotional world of humans. Again, MIT’s Ito made this observation of his own students.
The proof of Tay
Microsoft’s Tay chatbot disaster last year proved this point: a naïve robot, programmed by binary thinkers in a closed community. Tay was goaded by users into spouting offensive views within 24 hours of release, as the AI learned from the complex human world it found itself in. Humour and internet trolls weren’t part of its training: that’s an extraordinary omission for a chatbot let loose on a social network, and speaks volumes about the mindset of its programmers.
However, the cultural dimension of AI was demonstrated by another story in 2016: in China, Microsoft’s Xiaoice chatbot faced none of the problems that its counterpart did in the West: Chinese users behaved differently, and there were few reported attempts to subvert the application. Surely proof that AI is both modelled on, and shaped by, local human society. Its artificiality does not make it neutral.
These issues will become more and more relevant as law enforcement becomes increasingly automated. The cultural landscape and legal system surrounding a robot policeman in, say, Dubai is very different to that in Beijing or San Francisco.
The rise of robocop
In each of these three locations robots are already being trained and trialled by local police services: Pal Robotics’ Reem machines in Dubai (in public liaison/information roles); Knightscope K5s in the Bay Area (which patrol malls, recording suspicious activity); and Anbot riot-control bots in China.
There is no basis for assuming that future AI police officers or applications will implement a form of blank, globalised machine intelligence without bias or favour. It is more likely that they will reflect the cultures and legal systems of the countries in which they operate, just as human police do.
And the world’s legal systems are far from uniform. In Saudi Arabia, for example, to be an atheist is to be regarded as a terrorist, and women have far fewer rights than men. In Iran, homosexuality is punishable by death, as are offences such as apostasy (the abandonment of religious belief).
It’s easy to assume that, in the real world, no one would design AI systems to determine citizens’ private thoughts or sexual orientation, and yet here’s an example of AI being deployed to predict if people are gay or straight, a programme that the article describes as an “advance”. Note, too, how quickly this system has been developed within the current AI boom.
Now factor in robot police or AI applications enforcing laws in one culture that another might find abhorrent. The potential is clearly there for technology to be programmed to act against globally stated human rights.
In the US, the numbers of people shot by police are documented here by the Washington Post, while this report suggests that black Americans are three times more likely to be killed by officers than white Americans. Meanwhile, this article exposes the racial profiling that occurs in some sectors of US law enforcement, despite attempts to prevent it. In the UK, statistics reveal that force is more likely to be used against black Londoners by police than against any other racial group. This is the messy human world that robots are entering – robots programmed by human beings.
Politicians are increasingly targeting minority groups, or removing legal protections from them. In the US alone, recent examples include the proposed US bans on people travelling from certain Muslim-majority countries, and on transgender people serving in the military, along with the proposed removal of legal protections for LGBTQ people and the scrapping of the Obama-era DACA scheme. Russia is among several other countries to turn against LGBTQ citizens.
So might any future robocop perpetuate the apparent biases in the US legal system, for example? As we’ve seen, that will depend on what training data has been put into the system, by whom, to what end, and based on what assumptions. The COMPAS case study above suggests that core data can be tainted at source by previous flaws and inequalities in the legal system.
The limits of AI
But let’s get back to the technology itself. The UK-RAS white paper acknowledges that AI has severe limitations at present, and that many users have “unrealistic expectations” of it. For example, the report says: “One limitation of AI is the lack of ‘common sense’; the ability to judge information beyond its acquired knowledge […] AI is also limited in terms of emotional intelligence.”
Then the researchers make a simple observation that everyone rushing to implement the technology should consider: “true and complete AI does not exist”, says the white paper, and there is “no evidence yet” that it will exist before 2050.
So it’s a sobering thought that AIs with no common sense and possible training bias, and which can’t understand human emotions, behaviour, or social contexts, are being tasked with trawling context-free data pulled from human society in order to expose criminals – as defined by politicians.
And yet that’s precisely what’s happening in US and UK national surveillance programmes.
Opening the ‘black box’
The UK-RAS white paper takes pains to set out both the opportunities and the risks of AI, which it describes as a transformative, trillion-dollar technology, the future of which extends into augmented intelligence and quantum computing.
On the one hand, the authors note: “[AI] applications can replace costly human labour and create new potential applications and work along with/for humans to achieve better service standards. […] It is certain that AI will play a major role in our future life. As the availability of information around us grows, humans will rely more and more on AI systems to live, to work, and to entertain. […] AI can achieve impressive results in recognising images or translating speech.”
But on the other, they add: “When the system has to deal with new situations when limited training data is available, the model often fails. […] Current AI systems are still missing [the human] level of abstraction and generalisability. […] Most current AI systems can be easily fooled, which is a problem that affects almost all machine learning techniques.
“Deep neural networks have millions of parameters, and so to understand why the network provides good or bad results becomes impossible. Trained models are often not interpretable. Consequently, most researchers use current AI approaches as a black box.”
That last quote is telling: researchers are saying that some AI systems are already so complex that even their designers can’t say how or why a decision has been made by the software.
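A back-of-envelope calculation shows why inspecting individual weights gets you nowhere. The layer widths below are invented for a deliberately small fully-connected network; even this toy has hundreds of thousands of parameters, and production deep networks run to millions or billions.

```python
# Back-of-envelope sketch: parameter count for a small fully-connected
# network. Layer widths are hypothetical; each layer contributes
# (inputs x outputs) weights plus one bias per output.

layers = [1000, 512, 256, 10]  # invented layer widths
params = sum(a * b + b for a, b in zip(layers, layers[1:]))
print(params)  # 646410 -- for a network far smaller than any in production
```

No human can read meaning from hundreds of thousands of individual numbers, which is why researchers treat such models as black boxes.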
Organisations should be wary of the black box’s potential to mislead and to be misled, along with its capacity to tell people what they already believe – for better, or for worse. Business and government should take these issues on board, and the systems they release into the wild must be transparent – as far back as the first principles that were adopted before the parameters were specified. More, the data that is being put into these systems should be open to interrogation, to ensure that AI systems are not being gamed to produce weighted results.
Users: question your data before you ask an AI to do it for you, and challenge your preconceptions.
• For more articles on robotics, AI, and automation, go to the Robotics Expert page.
• Further reading: How Google search data reveals the truth of who we are (Guardian).
• Further reading: Face-reading AI will be able to detect your politics, claims professor.