Chris’ Blog

Where technology and society meet: signal amidst the noise


• Chris’ blog archives can be found here (2016-17), here (2016), and here (2015).


21 October 2017: AI & automation, the real political cost

A new report reveals how each constituency in the UK will be hit by AI and automation, and says that the Midlands and the North will be worst affected. The government needs to act now to prevent serious problems later, says Chris Middleton.

Headlines shout about killer robots, conferences hail the rise of AI, robotics reports pile up on boardroom desks, and politicians talk about automation – but as yet no one has formulated an adequate policy for maximising the opportunities and minimising the risks of these technologies, least of all the government’s own AI report.

So how to force MPs to abandon the rhetoric and engage with AI more urgently? One think tank has the answer: produce a heat-map of UK automation and show politicians how their own constituencies may be affected by it. The new report, The Impact of AI on UK Constituencies – Where Will Automation Hit Hardest? has been produced by Future Advocacy, a think tank that pushes for smart policy-making in the new industrial age.

Future Advocacy sees AI as the key that will unlock the mysteries of data and usher in the next wave of automation. The organisation has applied 2017 PwC data on the impact of automation to the local jobs market in each constituency of the UK: a simple but effective strategy.
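The arithmetic behind the heat-map is worth spelling out. The sketch below is my own rough reconstruction of the approach – not Future Advocacy’s code, and every figure in it is an illustrative assumption – weighting sector-level estimates of automation risk by each constituency’s local employment mix:

```python
# Rough reconstruction of the heat-map method (illustrative figures only):
# weight sector-level automation-risk estimates by each constituency's
# employment mix to get a local 'share of jobs at risk'.
SECTOR_RISK = {
    "transport_storage": 0.56,   # assumed probability of automation by 2030
    "manufacturing": 0.46,
    "health": 0.17,
    "education": 0.09,
}

def constituency_risk(jobs_by_sector):
    """Employment-weighted average of sector-level automation risk."""
    total_jobs = sum(jobs_by_sector.values())
    jobs_at_risk = sum(jobs * SECTOR_RISK[sector]
                       for sector, jobs in jobs_by_sector.items())
    return jobs_at_risk / total_jobs

# A hypothetical constituency dominated by transport and storage jobs:
mix = {"transport_storage": 30_000, "manufacturing": 10_000,
       "health": 12_000, "education": 8_000}
print(f"Share of local jobs at risk: {constituency_risk(mix):.1%}")  # ≈ 40%
```

On figures like these, a seat heavy in transport and storage work lands near the top of the table – which is exactly the pattern the report finds.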

Heat-map of automation risk by UK constituency. Source: Future Advocacy.

The resulting heat-map (see picture) reveals that Shadow Chancellor John McDonnell’s Hayes and Harlington constituency will be the hardest hit by automation and machine intelligence, with nearly 40 per cent of jobs at risk by 2030. Its high concentration of transport and storage jobs makes it most susceptible to disruption. (The long-term impact of driverless vehicles could be massive in the years ahead: in the US, for example, truck driving is the most common job in 29 out of 50 states.)

Next on the list are two Conservative seats, Crawley (37.8 per cent of jobs at risk) and North Warwickshire (37.1 per cent), followed by two Labour heartlands, Alyn and Deeside (36.8 per cent) and Brentford and Isleworth (36.8 per cent).

The constituencies of leading political figures are certainly in the frame, says the report. Among these are Maidenhead, held by Prime Minister Theresa May (where 28.8 per cent of jobs are at risk); Twickenham, held by LibDem leader Sir Vince Cable (27.2 per cent of jobs at risk); and Islington North, held by Labour leader Jeremy Corbyn (26.2 per cent of jobs).

While Corbyn’s constituency trails May’s and Cable’s in the proportion of jobs that could fall to the machines – and 602 other constituencies face a higher risk still – the figure nevertheless represents more than a quarter of the employment opportunities in the area. Clearly, AI and automation show no respect for political boundaries.

The big picture

Zoom out from local politics to look at the UK as a whole, and a stark picture emerges: the regions that are likely to be hardest hit by automation are the Midlands and the North of England, where jobs in transport, manufacturing, and warehousing are often concentrated.

As a result, ‘one-size-fits-all’ policies won’t work in this fast-emerging world, says Future Advocacy: “Our analysis suggests that the unequal geographical distribution of the impact of automation deserves immediate attention by government, particularly as it is regions that have previously suffered the effects of industrial decline that are likely to be worst hit.

“It is important that the government learns the lessons that the recent history of manufacturing, mining, and similar industries in the UK have taught us. The decline of these industries in parts of the UK towards the end of the last century may have been inevitable, but it is unarguable that the transition to new job types and different industries in these areas could have been managed better.

“The consequences of the historic decline of industries such as manufacturing and coal mining in these regions have been extensively studied, and include high rates of unemployment, high prevalence of illnesses such as depression and drug/alcohol abuse, and depopulation. It is concerning that the areas that have already suffered so much from industrial decline could be hardest hit yet again.

“Even more worryingly, the speed at which job displacement to automation could potentially occur is worth highlighting. For example, while it took several decades for the 19,000 mining jobs in the whole of Warwickshire to be lost, our analysis suggests that around 20,500 jobs (or 37.1% of the total number of jobs in 2015) in North Warwickshire could be displaced by the early 2030s. The impact on individuals, families, and whole communities will be profound.”

Lower-risk constituencies tend to be those that offer more opportunities in sectors such as education and health, says Future Advocacy – confirming findings published in the RSA’s recent Age of Automation report, which noted that jobs in hospitality, leisure, medicine, healthcare, and education are most resistant to automation, because they rely most on human relationships and empathy.

The other side of the coin

Yet automation isn’t a zero-sum game, as the report acknowledges. Automation may sweep aside many routine, low-skilled tasks, but it will also create new types of job. In the long term, the economic growth spurred by these new technologies may mean that the employment impact is neutral, but long before then skills, education, and training will be the real battlegrounds as people fight to retain their place in society.

The report says: “For those with just GCSE-level education or lower, the estimated potential risk of automation is as high as 46 per cent in the UK, but this falls to only around 12 per cent for those with undergraduate degrees or higher. Similarly, men may be at higher risk of job displacement by automation than women.

“The sectors with the highest estimated risk of automation are characterised by relatively high proportions of male employees and of workers with low educational attainment.”

The risk of rising economic disparity, with wealth concentrated in fewer and fewer hands, is real, says the report. Earlier this year, right-wing think tank Reform suggested that a quarter of a million jobs could be swept out of the UK public sector alone, leaving teachers, doctors, nurses, and care workers to compete via reverse auction for ad hoc work – a scenario that Reform suggested would be a good thing.

To avoid an AI-enabled field day for the ideologues, Future Advocacy says that the government should conduct research into alternative income and taxation models that favour a fairer distribution of wealth.

“This could include undertaking well-designed trials of a Universal Basic Income (UBI) along the lines of those currently underway in Finland, Spain, the Netherlands, and Canada. The government’s fiscal and welfare policies must be updated to ensure that wealth is not increasingly concentrated in the hands of a few commercial entities who own robots and other automated technologies.

“Ultimately, we support a taxation model that results in a fairer distribution of the wealth that these technologies will create.”

Checks and balances

The need for an economic counterbalance to the rise of the machines seems overwhelming, as society prepares itself for a more skilled, creative, flexible, and/or portfolio-based future. But whether a Conservative government would ever consider a UBI strategy must be in doubt: it’s hard to imagine right-wing newspapers ever getting behind the idea, despite its obvious good sense.

So how else should the British government engage with automation today, so that it can plan for a better, fairer, and more equal future? Future Advocacy says that it should:

• Commission and support research to assess which employees are most at risk of job displacement, including how impacts will differ by employment sector, region, age, gender, educational attainment, and socio-economic group.

• Draft a White Paper on adapting the education system to lifelong learning and maximising the opportunities of AI – a recommendation also made by techUK in its 2017 manifesto for digital renewal, by the RSA in its automation report, and by Jeremy Corbyn in his speech to the 2017 Labour conference.

Such a White Paper shouldn’t restrict itself to extolling the importance of STEM and coding skills, adds Future Advocacy, but also make detailed proposals to future-proof training in creative and interpersonal skills. More, it should support initiatives that encourage underrepresented groups to pursue AI and robotics training, including women and ethnic minorities.

• Make the AI opportunity a central pillar of the UK’s Industrial Strategy and of the trade deals that the UK negotiates post-Brexit – a recommendation also made by techUK.

• Ensure that the migration policy in place post-Brexit will still allow UK-based companies and universities to attract the best AI and robotics talent from all over the world.

• Develop smart, targeted strategies to address future job displacement.

The report adds: “The importance of targeting these interventions at those most at risk cannot be overemphasised.”

Conclusions

Together, this report, the RSA’s Age of Automation document, and techUK’s election-themed manifesto for digital renewal are better assessments of the new world of work than the government’s much-feted AI report, published last Sunday while ministers were kneeling to pray that Brexit works.

That the government needs to look at these figures is beyond any doubt. No 10 needs to stop obsessing about what sort of society it doesn’t want (one overseen by Europe) and start thinking about what kind of economy it wants to create in the long term: a future in which it will need to be ambitious, entrepreneurial, and prepared to take risks.

If everyone in society is to benefit from the new world of work – which was surely the subtext of Brexit – then Whitehall needs to give this report urgent consideration.

.chrism

Further reading: Government AI Report: Academic, Disconnected, Disappointing (diginomica)


2 October 2017: Does Jeremy Corbyn really want a robot tax?

Photo: Getty Images

Misreported by the mainstream media yet again, the Labour leader backs research saying that the UK must invest more in training its workers for an automated future, says Chris Middleton.

On 27 September, Labour leader Jeremy Corbyn used his speech to the party’s 2017 conference in Brighton to call for a new industrial and education strategy to face “the challenges of the future [that] go beyond the need to turn our backs on an economic model that has failed to invest in and upgrade our economy”.

Further education, life-long learning, and a rebranded education system are part of Corbyn’s new deal. He said: “Labour will build an education and training system from the cradle to the grave that empowers people. Not one that shackles them with debt. That’s why we will establish a National Education Service, which will include at its core free tuition for all college courses, technical, and vocational training so that no one is held back by costs and everyone has the chance to learn.”

Corbyn suggested that state-owned investment banks would be the best way to push funding to every corner of the UK. With several retail and investment banks threatening to leave the UK in the run-up to Brexit, the idea may be timelier than his critics realise.

At least one international study supports the need for the UK to invest more from the centre. The Global Innovation Index (GII), published annually by the World Intellectual Property Organisation (WIPO) and partner institutions, backs Corbyn’s view that the UK is investing too little in both its education system and its central infrastructure. The UK is a long way down the rankings in several key areas, says the report: 102nd out of 127 countries in capital formation around critical infrastructure programmes; 22nd for education quality; 25th for education expenditure as a percentage of GDP; and just 46th in tertiary enrolment.

Rise of the robots

Corbyn then turned his attention to automation, robotics, and AI, which the government has identified as being among the ‘eight great technologies’ that are critical to economic prosperity. He said: “The tide of automation and technological change means re-training and management of the workforce must be centre-stage in the coming years.

“We need urgently to face the challenge of automation: robotics that could make so much of contemporary work redundant. That is a threat in the hands of the greedy, but it’s a huge opportunity if it’s managed in the interests of society as a whole.

“We won’t reap the full rewards of these great technological advances if they’re monopolised to pile up profits for a few. But if they’re publicly managed – to share the benefits – they can be the gateway for a new settlement between work and leisure. A springboard for expanded creativity and culture.”

The phrase ‘publicly managed’ appears to refer to his proposed reset of the UK’s banking and investment systems: ethical investment, in other words, and a new form of mutually beneficial capitalism.

Far from being an old red rag for conservative bulls to charge at, the policy echoes the findings of several progressive reports, such as the RSA’s recent The Age of Automation, which says that the burden of tax must shift from labour to capital in order to counter any socially destructive impacts of mass automation. Other robotics and AI studies have made similar recommendations.

Indeed, this is the flip side of the coin tossed by Reform earlier this year. The right-wing think tank – favoured by Theresa May and health secretary Jeremy Hunt – suggested that mass automation would allow the public sector alone to shed 250,000 workers and force doctors, teachers, and nurses to compete for gig economy work via reverse auction (bid to work for less money).

So it is hard to argue with Corbyn’s claim that automation may be used to maximise benefits for the few when that scenario is spelled out in conservative analysts’ own reports, which even suggest that it would be a good thing.

And yet the papers do argue with the Labour leader, as is traditional in the UK’s politicised media landscape. “Return of the Luddites!”, shouted the Telegraph, ignoring Corbyn’s clearly stated support for applying automation in a way that benefits all of society. The implication of the Telegraph’s headline is perhaps that modernity demands that only the few should benefit.

En masse, the UK’s conservative press pushed the message that Corbyn is calling for a ‘robot tax’ to penalise companies that automate their workforces; indeed, it was suggested that Labour’s pre-speech briefings spun this line to the media. That may be true, but what Corbyn actually said on tax was very different:

“When I’ve met business groups, I’ve been frank that we will invest in the education and skills of the workforce and we will invest in better infrastructure from energy to digital, but we are going to ask big business to pay a bit more tax.”

So there are three explanations for the media claim that Corbyn is calling for a robot tax. One, he is, but neglected to say so in a detailed, 6,000-word speech; two, Labour’s own spin doctors don’t understand the distinction between a robot tax and raising corporation tax to pay for life-long learning; and three, the conservative press have conflated two unrelated concepts in an attempt to alarm business leaders. Any of these may be true.

But whichever is the case, the difference between raising corporation tax and taxing a robot as if it were an employee – an idea supported by Bill Gates, the European Union, and others – should be obvious. One is a general levy on company profits; the other falls specifically on the businesses that deploy robots, treating each machine as a taxable worker. Conflating the two to suggest that Corbyn sees robots as the enemy is both unsupported by the text of his speech, and unhelpful to this debate.
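A back-of-envelope comparison makes the distinction concrete. In the sketch below, the corporation tax rates reflect Labour’s 2017 proposal to raise the headline rate from 19 to 26 per cent; everything else – the firm’s profit, its robot count, and the robot tax rate itself – is an invented assumption, since no such tax exists:

```python
# Back-of-envelope sketch: where each instrument falls (assumed figures).
profit = 10_000_000          # a firm's annual profit, GBP (assumed)
robots = 200                 # robots the firm deploys (assumed)
displaced_salary = 25_000    # salary each robot notionally replaces (assumed)

# Corporation tax rise: a general levy on profits (19% -> 26%, Labour's proposal).
extra_corporation_tax = profit * 0.26 - profit * 0.19

# Robot tax: a payroll-style levy per machine deployed (the 12% rate is invented).
robot_tax = robots * displaced_salary * 0.12

print(f"Extra corporation tax: £{extra_corporation_tax:,.0f}")
print(f"Robot tax:             £{robot_tax:,.0f}")
```

The first falls on any profitable company, robotic or not; the second falls only on those that automate – which is precisely why conflating them muddies the debate.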

Business leaders respond

In the event, the UK’s business leaders responded better to Corbyn’s speech than the Tory press did, which suggests that Corbyn is ringing bells in some City towers, at least. The CBI said that it “shares much of Labour’s vision for a fairer society underpinned by good business. But without an open dialogue there is a risk that some of their policies could knock us into reverse gear.

“The focus on research & development, infrastructure and education is encouraging, but artificially hiking wages and changing corporation tax could be investment dampeners, not drivers. Labour is certainly laying out a new way forward and we urge them to iron out inconsistent messages – especially the relationship between state and industry – and clarify policies that are sometimes hard to see delivered and paid for.

“Labour can be reassured that they do have a significant joint agenda with businesses of all sizes, which stand ready to ensure that the opposition’s ideas deliver sustainable prosperity across the UK, and avoid the threat of an era of regressive industrial strategy.”

So what did the tech industry think of Corbyn’s speech?

Earlier this year, techUK, the organisation for the UK’s IT innovators, called for a new industrial strategy in its own manifesto for change, along with a programme of life-long learning. On the face of it, therefore, Corbyn is simply backing their ideas. But in its own response to the Labour leader’s speech, techUK was oddly ambivalent.

Deputy CEO Antony Walker said: “Jeremy Corbyn is right that there are huge benefits to society that can come through automation. From AI that can help improve diagnosis rates in the NHS to machine learning that can reduce time wasted on form filling in businesses. But if the UK wants to lead the way in harnessing this power we must be careful not to undermine the investment in digital technologies that will drive productivity and economic growth. [In fact, Corbyn simply said he wants broader, more ethical investment.]

“All political parties should be thinking about how we handle the challenges to come from accelerated automation. But it is too soon to be making assumptions about its impact on either jobs or the tax base. Care needs to be taken not to put a tax on productivity growth that is so fundamental to raising living standards.

“The challenges posed by automation cannot be solved by a short-term fix. Automation can and will lead to the creation of new jobs and industries. What is important is that UK workers have the skills and education needed to take advantage of those opportunities. That is why Labour is right to highlight the importance of improving investment in education and lifelong learning. This approach must take priority over relying on taxation to slow the pace of change.”

In other words: “Nice sentiment, Jeremy, but don’t ask us to pay more tax,” says UK IT.

Reverse Reaganomics

But are Corbyn’s critics right to say that raising corporation tax deters growth and innovation? Not according to the source of the definitive ‘tax cuts equal growth’ policy. “In reality, there’s no evidence that a tax cut now would spur growth,” says Bruce Bartlett, Republican domestic policy adviser to President Ronald Reagan in the 1980s.

In an opinion piece for the Washington Post, Bartlett now describes his own policy as “wishful thinking” and a “Republican tax myth”, and explains that much stronger growth followed President Clinton’s tax increases in the 1990s, and President Obama’s this century, than occurred during lower tax regimes.

But back in the UK, one thing is certain: Whitehall must improve education and infrastructure spending in global terms – no statistics say otherwise. The UK also needs to triple its R&D investment to remain competitive post-Brexit.

Meanwhile, if business is to be the sole beneficiary of both mass automation and a skilled workforce – while the gig economy leaves everyone else scrabbling for micro-payments – then raising corporation tax to help better educate the populace and give them a lifetime of opportunities is a sensible and just idea.

The alternative is raising taxes for everyone, including those who will be hardest hit by the ongoing application of 19th Century industrial thinking to 21st Century technology. Far from presenting an outdated vision, Corbyn is saying that the old strategy is no longer fit for purpose.

As I’ve said before, the core challenge in robotics and automation is actually very simple: what developers believe they are creating (assistive technologies for a better society, aka man plus machine), and what most customers think they are buying (a means to slash costs, remove workers, and force up productivity, aka man vs machine), are two completely different things. More, Whitehall and the Bank of England appear to see the benefits as coming from the latter camp, as my report for diginomica from UK Robotics Week 2017 revealed.

And despite the tendency of the digital world to enable flat, peer-to-peer, collaborative processes, the UK’s policymakers are trying to shoehorn the technologies into a top-down industrial strategy that was forged in an era of cotton mills and workhouses. Far from following that trend, Jeremy Corbyn is trying to break it. That’s a forward-looking policy, not a Luddite view.

But in the long run, he may have nothing to worry about when it comes to British workers being swept aside by an unholy alliance of machines and men in stovepipe hats: the UK is investing a pitiful amount of money in kickstarting the sector domestically, despite having identified it as being critical to future prosperity.

With a total central investment of just £300 million between 2016 and 2020, the UK is nowhere in global terms; Japan, for example, is investing $161 billion to create a “super-smart society”, while China is automating faster than any other country. Britain’s uptake of the technologies lags a long way behind most Western economies, such as Germany, France, and Sweden.

And that’s not all: according to the Science and Technology Select Committee (quoted in the RSA’s automation report), 80 per cent of the UK’s investment in robotics comes directly from the EU. With no strategy in place to replace that funding, the UK’s chance to be a world leader may already have gone.

.chrism

• This article was first published on diginomica.


25 September 2017: What’s Uber really driving at?

Agent of change – or lord of misrule?

Chris Middleton zeroes in on the real story behind the headlines and popular protests about Uber’s lost licence to trade in London.

On 22 September, Transport for London (TfL) refused to renew Uber’s private hire licence on the streets of the capital, saying that it had taken the decision on the grounds of “public safety and security implications”.

What “security implications” referred to is unclear, but the move was merely the latest in a series of legal tussles for Uber, which has battled authorities in cities throughout the world. Many of these battles have centred on the clash between what Uber bills as a ride-sharing app and what millions of people use as a taxi service with non-vetted (if user-reviewed) drivers.

Arguably, Uber’s main achievement has been to sidestep regulations that are designed to protect the public – with its customers’ support – while avoiding EU sales tax. That’s not a situation that authorities were going to rubber-stamp forever.

In Paris in 2014 and 2015, taxi company protests and attacks on Uber drivers led to the UberPop app being suspended there, but in many other cities, such as New York, the service remains popular.

TfL also cited Uber’s use of its Greyball technology, which has helped the company to evade investigators, according to a New York Times report earlier this year.

Yes we Khan?

Mayor of London Sadiq Khan said in a statement: “I fully support TfL’s decision. It would be wrong if TfL continued to license Uber if there is any way that this could pose a threat to Londoners’ safety and security.” Steve McNamara, general secretary of the Licensed Taxi Drivers’ Association, added: “This immoral company has no place on London’s streets.”

Some have criticised Khan for a retrograde step, but he had little choice but to back his transport chiefs. Khan’s role shouldn’t include rubber-stamping the wishes of a US corporation, however popular its services may be with citizens. However, some have argued that TfL’s decision was itself political: the result of a concerted campaign by the Licensed Taxi Drivers’ Association, supported by – among others – Nigel Farage.

In London, Uber claims up to 3.5 million passengers or registered users, while its app provides on-demand, flexible employment for 40,000 drivers, it says. However, Uber’s figures may need greater scrutiny, as 3.5 million is nearly half of the resident population of the capital.

The effective ban comes at a challenging time for both the capital and Brexit UK, as the country struggles to attract inward investment, maintain consumer confidence, and ready itself for the days when increasing numbers of citizens will rely on the gig economy to supplement their income as automation increases.

The company said it will appeal against TfL’s “Luddite decision”, and added, “Far from being open, London is closed to innovative companies”.

However, while popular in 600 cities worldwide (according to Uber’s own estimates), the company’s main innovation is mobilising other people’s assets to build a market presence that will ultimately dispense with their services.

Uber is among the many companies, such as Waymo (currently suing Uber for patent infringement and misappropriation of Google assets), Tesla, Ford, Mercedes, and GM, to be developing autonomous vehicle technologies. Uber’s long-term aim is almost certainly to build the infrastructure to summon driverless cars with a click. Much of that infrastructure exists already.

Any public transport service that’s based on escalating private car ownership in cities can’t be held up as a vision of the future. So accusing its opponents of being Luddites is wide of the mark – and Uber knows it. Its drivers are a medium-term means to an automated end.

Despite this, Uber’s general manager in London, Tom Elvidge, told the BBC that he was putting the company’s 40,000 drivers front and centre of the protest: “To defend the livelihoods of all those drivers, and the consumer choice of millions of Londoners who use our app, we intend to immediately challenge this in the courts.”

The gig conundrum

Many of those drivers rely on Uber for ad hoc, gig-economy income, but not exclusively; it’s not the case that all are now out of a job. Indeed, Uber’s insistence that they are is interesting: up until now, it has always claimed its drivers are self-employed.

A more persuasive argument in Uber’s favour is that it sweeps aside a bias that some have observed, anecdotally, in London’s taxi trade: a bias against ethnic minorities, both as drivers and passengers. And Uber will take you south of the river.

That said, some Uber workers have taken to social media themselves to criticise the company for, among other things, flooding the market with drivers to force costs down, and taking commissions of up to 30 per cent. But at present, the howls of protest at TfL’s move are louder than those of Uber’s critics, which has weakened Khan’s standing in the capital.

Uber is seen as synonymous with the gig economy, so it’s worth considering its effect. Uber is forcing down the cost of services to end-users – not just within its own business, but also in its sector. Customers love it, just as they love free music and movies, but (understandably) they ignore the long-term repercussions:

While, over time, the opportunities to earn money from the gig economy increase – a good thing – the ability to earn a living from them dissipates. In this sense, the gig economy is emerging as a broad-spectrum scrabble for micro-payments, while automated businesses, such as Uber, rake in escalating profits. Welcome to the new 19th Century.

The music industry is the classic example. The network connects a musician with listeners worldwide – another good thing – while stuffing the channel with noisy middlemen and advertisers, leaving our globally-connected artist with an income of cents, if she’s lucky. This is because, like on-demand transport and information, music has become commoditised via the network effect. Any musician can earn more from busking for one hour in a small town centre than she can from 100,000 streams in the global village, Spotify.

Some policymakers are well aware of this phenomenon. For example, right-wing think tank Reform suggested earlier this year that mass automation in the UK public sector should be backed by gig-economy workers competing by reverse auction for ad hoc bookings (bidding to work for less and less money). In this case, the workers in Reform’s sights were teachers, doctors, and nurses. In this sense, the gig economy could be seen as a wholesale transfer of income and services to the many, but of profits and power to the automated few. That sits uncomfortably in a peer-to-peer world.
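To be clear about the mechanism: a reverse auction inverts the usual logic, awarding the work to whoever will do it for the least. A minimal sketch, with invented names and figures:

```python
# Minimal reverse auction (illustrative only): the booking goes to whoever
# bids to work for the LEAST money, pushing pay downward over time.
bids = {"worker_a": 22.50, "worker_b": 19.00, "worker_c": 17.25}  # £/hour
winner = min(bids, key=bids.get)
print(f"Booking awarded to {winner} at £{bids[winner]:.2f}/hour")
```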

To the future, then

Which brings us back to Uber. Whether TfL’s move is enforceable in the virtual world remains to be seen: some previous examples of bans or licence rejections have seen drivers continue to pick up work. Uber itself has long favoured a Wild West approach to business whenever it rides into town.

The challenge for legislators is that everyone loves a cowboy. TfL can refuse a licence, but it can’t mass-delete a popular app – unless app stores cease to approve it. Many Uber users now see the service as a consumer right, and as the most convenient, safest, low-cost transport solution in London – even if TfL believes their safety to be low on Uber’s list of priorities.

But if Uber is sincere in its determination to build a collaborative, gig-economy, on-demand future – rather than seizing market share at any cost – then it must learn how to collaborate itself. 

Uber CEO Dara Khosrowshahi acknowledged this when he admitted, “The truth is that there is a high cost to a bad reputation”. He said that he would work towards collaborating with authorities to make the company better: a welcome change of tone.

Put another way, a great app, a disruptive technology, and a popular service don’t excuse being a bad corporate citizen. Whether serial law-breaker or popular disruptor, it’s time for Uber to grow up. To criticise it isn’t to be a Luddite. The truth is, perhaps, much simpler: it had a licence and blew it.

But at the heart of this story is a good idea and a popular service. So let’s hope that a better version comes along soon, either via an improved, more contrite Uber, a competitor, or Wired’s suggestion of a new employee-owned service.

A Sky Data poll shows that more than half of people in the UK support Uber operating in the country. Meanwhile, the ride-share company has launched an online petition at Change.org, urging users to help save the service in London. At the time of writing, it has over 700,000 supporters. However, a random sample of responses to the petition reveals that a surprising number are signing from outside the UK – according to their stated locations – while the media is portraying it as a London protest. In an era of climate change, food banks, Trump, and nuclear Armageddon, it’s comforting to know that people around the world finally have something to get angry about. 

• This article was first published on Computing.
• For a compelling alternative view, try this blog from Chris Yiu.

.chrism


September 2017: UK digital progress excellent, but SME support dying, warns techUK

Chris Middleton contrasts the positive vision of a new public sector report with some critical failings identified by the Civil Service itself.

Goodbye SMEs in government?

TechUK, the organisation for Britain’s technology innovators, has rejoined the political fray with a new paper on how the government can deliver its vision of digital transformation. The report, Smarter Services, has been produced by the organisation’s Public Services Board.

Earlier this year, techUK published an excellent manifesto for post-Brexit digital renewal. What’s different this time around is that the group has drawn its findings from the machineries of government itself: a 2017 survey of 948 civil servants, including 200 at C-level or above.

So what does the public sector think of its own tech record? Although 97 per cent of workers see technology as an enabler or necessity, 57 per cent of respondents believe that a shortage of internal skills is an obstacle to those benefits being realised – a significant increase from last year. Critical skills gaps exist in digital service design, procurement, change management, and data management/analytics. The last two are key drivers towards organisational efficiency, says the report.

It adds: “Senior civil servants and those working in digital roles had more confidence that their department had the requisite skills and capabilities to deliver its business plan than their juniors did.

“When asked to rate their department’s expertise in four key areas (digital service design, data, procurement, and change management), on average 20 per cent more civil servants in digital roles agreed that their department had the skills necessary to deliver its business plan. This could signify that while government has had some success in attracting expertise to its Digital, Data and Technology profession, these skills have yet to permeate the wider civil service.”

The logjam extends beyond the perimeter of government, says techUK, with 79 per cent of civil servants believing that current systems and working practices prevent citizens from interacting with the government more online – something for which there is a strong appetite, says the report. So what’s the solution?

For many in public service, the answer is sharing more information. Removing the barriers to collaboration is an important route to improving citizen services online, according to 93 per cent of respondents. Thirty-six per cent of civil servants think legislation prevents them from sharing more, while a further 36 per cent believe that incompatibilities in internal working practices are the root cause.

Nearly one-third of respondents think moving more citizen transactions online is either too complex a challenge (19 per cent) or too expensive (13 per cent). Among senior civil servants, this rises to 43 per cent, perhaps suggesting that the business case for digital services needs to be made higher up the departmental food chain.

But when it comes to the vision for technology, techUK believes that the government is making the right noises – if one sets aside Whitehall’s criticism of end-to-end encryption, a bedrock of digital trust. “To change and to do so at pace” was how the then Minister for the Cabinet Office, Ben Gummer, set out the vision for the public sector in early 2017, when he announced the government’s transformation strategy.

The report says: “This is a laudable vision, and one that the government has already made great strides towards. The UK is ranked as the best digital government in the world by the UN, and the £450m increase in the Government Digital Service’s budget made in the 2015 spending review signals the government’s intention to build upon this solid foundation.”

Slipping gears 

So the UK is making excellent progress, says techUK, even if the centuries-old Whitehall machinery sometimes slips its gears in its effort to move at the speed of technology disruption. Indeed, a core function of the Civil Service is to prevent disruption as different administrations come and go: a factor that should never be ignored by tech strategists.

The UN’s positive assessment of the UK’s digital programme was supported earlier this year by the 2017 Global Innovation Index (GII), an annual report published by the World Intellectual Property Organisation (WIPO) and two business schools. It rated the UK the best in the world for its overall use of ICT, as well as top in both e-participation and digital government. However, that same report slammed the UK for, among other things, its poor investment in education and in upgrading the national infrastructure. These failings are two further impediments to Whitehall’s digital ambitions.

The techUK report admits that the “challenge remains great”: “The National Audit Office has warned that government has so far struggled to make a success of end-to-end transformation of the sort envisaged by the Transformation Strategy. The disruption of core public services caused by the recent WannaCry cyberattack highlighted that the public sector remains a disparate and often difficult environment for transformation to flourish, with governance, risk, and skills shortages significant barriers to be overcome.”

Of course, the ‘successful’ WannaCry attack was also indicative of a government that failed to listen to repeated warnings, and whose sweeping programme of cuts forced departments to bypass essential OS upgrades.

Nevertheless, the techUK report is upbeat and optimistic, especially when it comes to the UK’s capacity to innovate: “Fortunately, the UK also benefits from one of the most vibrant and thriving digital economies on the planet. UK-based businesses of all shapes and sizes are pushing boundaries, not only in terms of digital innovation, but also of large-scale business transformation and change management.

“This knowledge and experience should prove a valuable resource for the public sector, and industry stands ready to be constructive partners in the transformation journey. techUK has been working hard to bring public and private sectors together to address these issues.”

SMEs vs. ‘the oligopoly’

So what of the future? The organisation makes a number of recommendations for how the transformation strategy can be made to work. It says the government should:

• Increase its willingness to experiment with new working practices
• Develop channels to fund and account for cross-government work
• Create common standards and working practices across departments
• Offer three-year placements in industry for civil servants in technical roles
• Provide all Fast Stream workers with digital skills training, and
• Use public sector procurement to help foster innovation in the supplier community.

The latter we can file under ‘ambitious’, as the government’s procurement practices are surely part of the problem as much as they may be the long-term solution. Despite the efforts of former Cabinet Office Minister Francis Maude to wean Whitehall off its ‘oligopoly’ habit – to use Maude’s own description of the problem – the government has consistently swung back to the enterprise giants whenever it has nurtured the green shoots of cloud-native or SME alternatives.

A good comparison is the scene in Ridley Scott’s The Martian where a critical systems failure causes Matt Damon’s thriving crops to wither in the perishing alien atmosphere.

The report concurs with this assessment: “Despite the government setting a target to spend £1 in £3 of its procurement budget with smaller and medium-sized businesses by 2020, only 21 per cent of civil servants believe that there is an appetite within their department or organisation to increase the involvement of SMEs in the procurement chain. There has been a particularly large drop (13 per cent) in the proportion of respondents working in tech-facing roles who agreed with the statement.

“While only one in ten of those involved in procurement decisions agreed that their department or organisation had access to a wide range of suppliers, less than a quarter picked widening the supplier base as a priority. 24 per cent do not believe they need access to a diverse range of suppliers, down 12 per cent since 2016.”

So the tide seems to be turning against SMEs within the public sector: a cause for concern post-Brexit, when the UK will have no choice but to nurture its home-grown talent. Reversing that trend will demand real leadership, but the government may feel it has more important things to do.

Conclusions

This government and the preceding two administrations have been consistently criticised for too tight a focus on cost over value since last decade’s catastrophic recession. The report notes that, within the Civil Service at least, that culture is changing:

“The drive to deliver better services is increasingly seen as the primary driver of transformation within government rather than cost savings. More than twice as many civil servants see IT as critical to improving service delivery (78 per cent) than view it as critical to making cost savings (34 per cent).”

Both the challenge and the opportunity, therefore, lie in taking that message to the policymakers.

.chrism


August 2017: Is AI automating prejudice?

Chris Middleton reports on how the flaws in human society are already being replicated – often accidentally – by artificial intelligence.

• This article has been quoted in London’s Evening Standard newspaper.

AI is the new must-have differentiator for technology vendors and their customers. Yet the need to understand AI’s social impact is overwhelming, not least because most AI systems rely on human beings to train them. As a result, existing flaws and biases within our society risk being replicated – not in the code itself, necessarily, but in the training data that is supplied to some systems, and in the problems that they’re being asked to solve.

Without complete data, AI systems can never be truly impartial: they can only reflect or reproduce the conditions in which they are created, and the belief systems of their creators. This report will explain how and why, and share some real-world examples. The need to examine these issues is becoming increasingly urgent. As AI, machine learning, deep learning, and computer vision rise, buyers and sellers are rushing to include AI in everything, from enterprise CRM to national surveillance programmes and policing systems.

Are people with tattoos criminals?

One example of AI in national surveillance is the FBI’s bizarre scheme to record and analyse citizens’ tattoos, in order to predict if people with ink on their skin will commit crimes. Take a ‘Big Bang’ view of this project (rewind the clock to infer what the moment of creation must have been), and it’s clear that a subjective, non-scientific viewpoint (‘people with tattoos are criminals’) was adopted as the core principle of a national security system, and software was designed to prove it.

The code itself is probably clean, but the problem that the system is being asked to solve, and the data it is tasked with analysing, are surely flawed. Arguably, they betray the prejudices of the system’s commissioners. Why else would it have been conceived?

In such a febrile atmosphere, the twin problems of confirmation bias in research, and human prejudice in society, may become automated pandemics: AIs that can only tell people what they want to hear, because of how the system has been trained. Automated politics, with a veneer of evidenced fact.

Often this part of the design process will be invisible to the user, who will regard whatever results the system produces as being impartial. A recent AI white paper published by UK-RAS, the UK’s research organisation for robotics and AI, makes exactly this point: “Researchers saw how machine learning technology reproduces human bias, for better or for worse. [AI systems] reflect the links that humans have made themselves.”

That’s the view of the UK’s leading AI and robotics researchers. So, is AI automating prejudice and other societal problems? Or are these issues simply hypothetical?

The racist facial recognition system

The unfortunate fact is that they are already becoming real-world problems, in a significant minority of cases. Take the facial recognition system developed at MIT recently that was unable to identify African American women, because it was created and tested within a closed group of white males. The libraries for the system were distributed worldwide before an African American student at MIT exposed the fact that it could only identify white faces.

We know this story is true, because it was shared by Joichi Ito, head of MIT’s Media Lab, at the World Economic Forum 2017. He described his own students as “oddballs” – introverted white males working in small teams with few external reference points, he said.

The programmers weren’t consciously prejudiced, Ito explained, but it simply hadn’t occurred to them that their group lacked the diversity of the real world into which their system would be released.

As a result, a globally distributed AI was poorly trained and ended up discriminating against an entire ethnic group, which was invisible to the system. That the developers hadn’t anticipated this problem was their key mistake, but it was a massive one.

Male dominance and insularity are big problems for the tech industry: in the UK, just 17 per cent of people in science, technology, engineering, or maths (STEM) careers are women, while in the West the overwhelming majority of coders are young, white males.

The UK-RAS report shares a similar example of societal bias entering AI systems: “When an AI program became a juror in a beauty contest in September 2016, it eliminated most black candidates, as the data on which it had been trained to identify ‘beauty’ did not contain enough black-skinned people.” Again, the humans training the AI unconsciously weighted the data.

The lesson here is not that any given AI or line of code is inherently biased – although it might be – it’s that the data that populates AI systems may reflect local/social prejudices. At the same time, AI is seen as impartial, so any human bias risks becoming accepted as evidenced fact. Most AI is a so-called ‘black box’ solution (see below), making it hard for users to interrogate the system to see how or why a result was arrived at. In short, many AI systems are inscrutable.
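The mechanism is easy to demonstrate. The toy sketch below – synthetic data, entirely illustrative – trains a standard classifier on data dominated by one group; the learned decision boundary works well for that group and badly for the under-represented one, even though no line of the code is ‘prejudiced’:

```python
# Toy demonstration (synthetic data): a classifier trained on data dominated
# by one group learns a decision boundary that fails for another group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, pos_mean, neg_mean):
    """n positive and n negative samples, with group-specific distributions."""
    x = np.concatenate([rng.normal(pos_mean, 1, n), rng.normal(neg_mean, 1, n)])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x.reshape(-1, 1), y

# Group A dominates the training set; group B is barely represented.
xa, ya = make_group(1000, pos_mean=2.0, neg_mean=0.0)
xb, yb = make_group(20, pos_mean=0.0, neg_mean=-2.0)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluated separately, accuracy is high for group A and poor for group B.
xa_t, ya_t = make_group(1000, 2.0, 0.0)
xb_t, yb_t = make_group(1000, 0.0, -2.0)
print("Group A accuracy:", round(model.score(xa_t, ya_t), 2))
print("Group B accuracy:", round(model.score(xb_t, yb_t), 2))
```

Nothing in the model is malicious; the skew lives entirely in the training data – which is exactly where it is hardest for end users to see.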

The legal dimension

Why are these risks so important to consider? Evidence is mounting that such data problems may have begun to automate bias within our legal systems: a real challenge as law enforcement becomes increasingly augmented by machine intelligence in different parts of the world.

COMPAS is an algorithm that’s already being used in the US to assess whether defendants or convicts are likely to commit future crimes. The risk scores it generates are used in sentencing, bail, and parole decisions – just as credit scores are in the world of financial services. A recent article published on FiveThirtyEight.com set out the alleged problem with COMPAS:

“An analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend – and potentially treated more harshly by the criminal justice system as a result. On the other hand, white defendants who committed a new crime in the two years after their COMPAS assessment were twice as likely as black defendants to have been mislabeled as low-risk.

“An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system.” Again, this is a problem of flawed data being fed into an application that is seen by its users as impartial.
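ProPublica’s point turns on the types of error, not on overall accuracy, and the arithmetic is easy to reproduce. The snippet below uses invented counts – not ProPublica’s actual figures – to show how the same system can produce very different false positive and false negative rates for two groups:

```python
# Illustrative error-rate analysis (counts are invented, not ProPublica's).
def error_rates(tp, fp, tn, fn):
    fpr = fp / (fp + tn)  # labelled high-risk but did NOT reoffend
    fnr = fn / (fn + tp)  # labelled low-risk but DID reoffend
    return fpr, fnr

groups = {
    "group 1": dict(tp=300, fp=180, tn=220, fn=100),
    "group 2": dict(tp=200, fp=90, tn=310, fn=200),
}
for name, counts in groups.items():
    fpr, fnr = error_rates(**counts)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
```

On these invented numbers, group 1 faces twice the false positive rate of group 2, while group 2 is twice as likely to be mislabelled low-risk – the same asymmetry ProPublica reported, hidden behind similar headline accuracy for both groups.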

Tainted data in a networked system

The problem of tainted data runs deep in a networked society. Some months ago, a journalist colleague shared a story with Facebook friends of how he searched for images of teenagers to accompany an article on youth IT skills.

When he searched for “white teenagers”, he said, most of the results were library shots of happy, photogenic young people, but when he searched for “black teenagers”, he was shocked to see Google return a disproportionately high number of criminal/suspect mug shots.

(Author’s note: I verified these results at the time. The problem is still noticeable today, but far less overt, suggesting that Google has tweaked its algorithm.)

The underlying point is that, for decades, overall media coverage in the US, the UK, and elsewhere, has disproportionately focused on criminality within certain ethnic groups. This partial coverage populates the network, which in turn reinforces public perceptions: a vicious circle of confirmation bias feeding confirmation bias. This is why diversity programmes and positive messaging are important; it’s not about ‘political correctness’, as some allege; it’s about rebalancing a system before we replicate it in software.
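That vicious circle can be sketched in a few lines. The toy simulation below – every number an assumption – shows how a ranking system that simply promotes whatever gets clicked turns a small initial skew in coverage into a dominant one:

```python
# Toy feedback loop (assumed numbers): coverage skew -> more clicks ->
# higher ranking -> more skewed coverage served.
skew = 0.55  # assumed: 55% of initial coverage follows the biased framing
for step in range(1, 11):
    clicks_biased = skew * 1.1         # biased items draw slightly more clicks
    clicks_neutral = (1 - skew) * 1.0  # so the ranker serves more of them
    skew = clicks_biased / (clicks_biased + clicks_neutral)
    print(f"iteration {step}: biased share of results = {skew:.0%}")
```

A 10 per cent edge in click-through, compounded by the ranker, steadily pushes the biased share towards 100 per cent: no one chose that outcome, yet the system converges on it.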

This extraordinary article on Google search data reveals how prejudices run deep in human society. (Sample quote: “Overall, Americans searched for the phrase ‘kill Muslims’ with about the same frequency that they searched for ‘martini recipe’ and ‘migraine symptoms’.”)

Human bias can affect the data within AI systems at both linguistic and cultural levels, because – as we’ve seen – most AI still relies on being trained by human beings. To a computer looking at the world through camera eyes, a human is simply a collection of pixels. AI has no fundamental concept of what a person is, or what human society might be.

A computer has to be taught to recognise that a certain arrangement of pixels is a face, and that a different arrangement is the same thing. And it has to be taught by human beings what ‘beauty’ and ‘criminality’ are by feeding it the relevant data. The case studies above demonstrate that both these concepts are subjective and prone to human error, while legal systems throughout the world have radically different views on crime (as we will see below).

Our systems replicate our beliefs and personal values – including misconceptions or omissions – while coders themselves often prefer the binary world of computers to the messy, emotional world of humans. Again, MIT’s Ito made this observation of his own students.

The proof of Tay

Microsoft’s Tay chatbot disaster last year proved this point: a naïve robot, programmed by binary thinkers in a closed community. Tay was goaded by users into spouting offensive views within 24 hours of release, as the AI learned from the complex human world it found itself in. Humour and internet trolls weren’t part of its training: that’s an extraordinary omission for a chatbot let loose on a social network, and speaks volumes about the mindset of its programmers.

However, the cultural dimension of AI was demonstrated by another story in 2016: in China, Microsoft’s Xiaoice chatbot faced none of the problems that its counterpart did in the West: Chinese users behaved differently, and there were few reported attempts to subvert the application. Surely proof that AI is both modelled on, and shaped by, local human society. Its artificiality does not make it neutral.

These issues will become more and more relevant as law enforcement becomes increasingly automated. The cultural landscape and legal system surrounding a robot policeman in, say, Dubai is very different to that in Beijing or San Francisco.

The rise of robocop

In each of these three locations, robots are already being trained and trialled by local police services: PAL Robotics’ REEM machines in Dubai (in public liaison/information roles); Knightscope K5s in the Bay Area (which patrol malls, recording suspicious activity); and AnBot riot-control bots in China.

There is no basis for assuming that future AI police officers or applications will implement a form of blank, globalised machine intelligence without bias or favour. It is more likely that they will reflect the cultures and legal systems of the countries in which they operate, just as human police do.

And the world’s legal systems are far from uniform. In Saudi Arabia, for example, to be an atheist is to be regarded as a terrorist, and women have far fewer rights than men. In Iran, homosexuality is punishable by death, as are offences such as apostasy (the abandonment of religious belief).

It’s easy to assume that, in the real world, no one would design AI systems to determine citizens’ private thoughts or sexual orientation, and yet here’s an example of AI being deployed to predict if people are gay or straight, a programme that the article describes as an “advance”. Note, too, how quickly this system has been developed within the current AI boom.

Now factor in robot police or AI applications enforcing laws in one culture that another might find abhorrent. The potential is clearly there for technology to be programmed to act against globally stated human rights.

K5 on patrol in California

In the US, the numbers of people shot by police are documented here by the Washington Post, while this report suggests that black Americans are three times more likely to be killed by officers than white Americans. Meanwhile, this article exposes the racial profiling that occurs in some sectors of US law enforcement, despite attempts to prevent it. In the UK, statistics reveal that force is more likely to be used against black Londoners by police than against any other racial group. This is the messy human world that robots are entering – robots programmed by human beings.

Politicians are increasingly targeting minority groups, or removing legal protections from them. In the US alone, recent examples include the proposed US bans on people travelling from certain Muslim-majority countries, and on transgender people serving in the military, along with the proposed removal of legal protections for LGBTQ people and the scrapping of the Obama-era DACA scheme. Russia is among several other countries to turn against LGBTQ citizens.

So might any future robocop perpetuate the apparent biases in the US legal system, for example? As we’ve seen, that will depend on what training data has been put into the system, by whom, to what end, and based on what assumptions. The COMPAS case study above suggests that core data can be tainted at source by previous flaws and inequalities in the legal system.

The limits of AI

But let’s get back to the technology itself. The UK-RAS white paper acknowledges that AI has severe limitations at present, and that many users have “unrealistic expectations” of it. For example, the report says: “One limitation of AI is the lack of ‘common sense’; the ability to judge information beyond its acquired knowledge […] AI is also limited in terms of emotional intelligence.”

Then the researchers make a simple observation that everyone rushing to implement the technology should consider: “true and complete AI does not exist”, says the white paper, and there is “no evidence yet” that it will exist before 2050.

So it’s a sobering thought that AIs with no common sense and possible training bias, and which can’t understand human emotions, behaviour, or social contexts, are being tasked with trawling context-free data pulled from human society in order to expose criminals – as defined by politicians.

And yet that’s precisely what’s happening in US and UK national surveillance programmes.

Opening the ‘black box’

The UK-RAS white paper takes pains to set out both the opportunities and the risks of AI, which it describes as a transformative, trillion-dollar technology, the future of which extends into augmented intelligence and quantum computing.

On the one hand, the authors note: “[AI] applications can replace costly human labour and create new potential applications and work along with/for humans to achieve better service standards. […] It is certain that AI will play a major role in our future life. As the availability of information around us grows, humans will rely more and more on AI systems to live, to work, and to entertain. […] AI can achieve impressive results in recognising images or translating speech.”

But on the other, they add: “When the system has to deal with new situations when limited training data is available, the model often fails. […] Current AI systems are still missing [the human] level of abstraction and generalisability. […] Most current AI systems can be easily fooled, which is a problem that affects almost all machine learning techniques.

“Deep neural networks have millions of parameters, and so to understand why the network provides good or bad results becomes impossible. Trained models are often not interpretable. Consequently, most researchers use current AI approaches as a black box.”

That last quote is telling: researchers are saying that some AI systems are already so complex that even their designers can’t say how or why a decision has been made by the software.
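There are partial remedies. One common tactic – a sketch of the general approach, not a full answer to interpretability – is to probe the black box from the outside: shuffle each input feature in turn and measure how much the model’s performance degrades, as scikit-learn’s permutation importance does:

```python
# Probing a black-box model from the outside with permutation importance:
# shuffle each feature and see how much predictive performance drops.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

Techniques like this don’t open the box, but they at least show which inputs are driving a decision – a first step towards the transparency the authors call for.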

Conclusions

Organisations should be wary of the black box’s potential to mislead and to be misled, along with its capacity to tell people what they already believe – for better, or for worse. Business and government should take these issues on board, and the systems they release into the wild must be transparent – as far back as the first principles that were adopted before the parameters were specified. More, the data that is being put into these systems should be open to interrogation, to ensure that AI systems are not being gamed to produce weighted results.

Users: question your data before you ask an AI to do it for you, and challenge your preconceptions.

.chrism

• For more articles on robotics, AI, and automation, go to the Robotics Expert page.
• Further reading: How Google search data reveals the truth of who we are (Guardian).
• Further reading: Face-reading AI will be able to detect your politics, claims professor.
