Chris’ Blog

Where technology and society meet: signal amidst the noise


• Chris’ blog archives can be found here (2016-17), here (2016), and here (2015).


25 September 2017: What’s Uber really driving at?

Agent of change – or lord of misrule?

On 22 September, Transport for London (TfL) refused to renew Uber’s private hire licence on the streets of the capital, saying that it had taken the decision on the grounds of “public safety and security implications”.

What “security implications” referred to is unclear, but the move was merely the latest in a series of legal tussles for Uber, which has battled authorities in cities throughout the world. Many of these have centred on the clash between what is billed as a ride-sharing app and what is, for millions of users, effectively a taxi service with non-vetted (if user-reviewed) drivers.

Arguably, Uber’s main achievement has been to sidestep regulations designed to protect the public – with its customers’ support – while also avoiding EU sales tax. That’s not a situation that authorities were going to rubber-stamp forever.

In Paris in 2014 and 2015, taxi company protests and attacks on Uber drivers led to the UberPop app being suspended there, but in many other cities, such as New York, the service remains popular.

TfL also cited Uber’s use of its Greyball technology, which has helped the company to evade investigators, according to a New York Times report earlier this year.

Yes we Khan?

Mayor of London Sadiq Khan said in a statement: “I fully support TfL’s decision. It would be wrong if TfL continued to license Uber if there is any way that this could pose a threat to Londoners’ safety and security.” General secretary of the Licensed Taxi Drivers’ Association, Steve McNamara added: “This immoral company has no place on London’s streets.”

Some have criticised Khan for a retrograde step, but he had little choice but to back his transport chiefs. Khan’s role shouldn’t include rubber-stamping the wishes of a US corporation, however popular its services may be with citizens. However, some have argued that TfL’s decision was itself political: the result of a concerted campaign by the Licensed Taxi Drivers’ Association, supported by – among others – Nigel Farage.

In London, Uber claims up to 3.5 million passengers or registered users, while the app provides on-demand, flexible employment for 40,000 drivers, it says. However, Uber’s figures may need greater scrutiny: 3.5 million is nearly half of the capital’s resident population.

The effective ban comes at a challenging time for both the capital and Brexit UK, as the country struggles to attract inward investment, maintain consumer confidence, and ready itself for the days when increasing numbers of citizens will rely on the gig economy to supplement their income as automation advances.

The company said it will appeal against TfL’s “Luddite decision”, and added, “Far from being open, London is closed to innovative companies”.

However, while Uber is popular in 600 cities worldwide (according to its own estimates), the company’s main innovation is mobilising other people’s assets to build a market presence that will ultimately dispense with their services.

Uber is among the many companies, such as Waymo (currently suing Uber for patent infringement and misappropriation of Google assets), Tesla, Ford, Mercedes, and GM, to be developing autonomous vehicle technologies. Uber’s long-term aim is almost certainly to build the infrastructure to summon driverless cars with a click. Much of that infrastructure exists already.

Any public transport service that’s based on escalating private car ownership in cities can’t be held up as a vision of the future. So accusing its opponents of being Luddites is wide of the mark – and Uber knows it. Its drivers are a medium-term means to an automated end.

Despite this, Uber’s general manager in London, Tom Elvidge, told the BBC that he was putting the company’s 40,000 drivers front and centre of the protest: “To defend the livelihoods of all those drivers, and the consumer choice of millions of Londoners who use our app, we intend to immediately challenge this in the courts.”

The gig conundrum

Many of those drivers rely on Uber for ad hoc, gig-economy income, but not exclusively; it’s not the case that all are now out of a job. Indeed, Uber’s insistence that they are is interesting: up until now, it has always claimed its drivers are self-employed.

A more persuasive argument in Uber’s favour is that it sweeps aside a bias that some have observed, anecdotally, in London’s taxi trade: a bias against ethnic minorities, both as drivers and passengers. And Uber will take you south of the river.

That said, some Uber workers have taken to social media themselves to criticise the company for, among other things, flooding the market with drivers to force costs down, and taking up to 30 per cent commissions. But at present, the howls of protest at TfL’s move are louder than those of Uber’s critics, which has weakened Khan’s standing in the capital.

Uber is seen as synonymous with the gig economy, so it’s worth considering its effect. Uber is forcing down the cost of services to end-users – not just within its own business, but also in its sector. Customers love it, just as they love free music and movies, but (understandably) they ignore the long-term repercussions:

While, over time, the opportunities to earn money from the gig economy increase – a good thing – the ability to earn a living from them dissipates. In this sense, the gig economy is emerging as a broad-spectrum scrabble for micro-payments, while automated businesses, such as Uber, rake in escalating profits. Welcome to the new 19th Century.

The music industry is the classic example. The network connects a musician with listeners worldwide – another good thing – while stuffing the channel with noisy middlemen and advertisers, leaving our globally-connected artist with an income of cents, if she’s lucky. This is because, like on-demand transport and information, music has become commoditised via the network effect. Any musician can earn more from busking for one hour in a small town centre than she can from 100,000 streams in the global village, Spotify.

Some policymakers are well aware of this phenomenon. For example, right-wing think tank Reform suggested earlier this year that mass automation in the UK public sector should be backed by gig-economy workers competing for ad hoc bookings by reverse auction (bidding to work for less and less money). In this case, the workers in Reform’s sights were teachers, doctors, and nurses. In this sense, the gig economy could be seen as a wholesale transfer of services to the many, but of income, profits, and power to the automated few. That sits uncomfortably in a peer-to-peer world.

To the future, then

Which brings us back to Uber. Whether TfL’s move is enforceable in the virtual world remains to be seen: some previous examples of bans or licence rejections have seen drivers continue to pick up work. Uber itself has long favoured a Wild West approach to business whenever it rides into town.

The challenge for legislators is that everyone loves a cowboy. TfL can refuse a licence, but it can’t mass-delete a popular app – unless app stores cease to approve it. Many Uber users now see the service as a consumer right, and as the most convenient, safest, low-cost transport solution in London – even if TfL believes their safety to be low on Uber’s list of priorities.

But if Uber is sincere in its determination to build a collaborative, gig-economy, on-demand future – rather than seizing market share at any cost – then it must learn how to collaborate itself. 

Uber CEO Dara Khosrowshahi acknowledged this when he admitted, “The truth is that there is a high cost to a bad reputation”. He said that he would work towards collaborating with authorities to make the company better: a welcome change of tone.

Put another way, a great app, a disruptive technology, and a popular service don’t excuse being a bad corporate citizen. Whether serial law-breaker or popular disruptor, it’s time for Uber to grow up. To criticise it isn’t to be a Luddite. The truth is, perhaps, much simpler: it had a licence and blew it.

But at the heart of this story is a good idea and a popular service. So let’s hope that a better version comes along soon, either via an improved, more contrite Uber, a competitor, or Wired’s suggestion of a new employee-owned service.

A Sky Data poll shows that more than half of people in the UK support Uber operating in the country. Meanwhile, the ride-share company has launched an online petition at Change.org, urging users to help save the service in London. At the time of writing, it has over 700,000 supporters. However, a random sample of responses to the petition reveals that a surprising number are signing from outside the UK – according to their stated locations – while the media is portraying it as a London protest. In an era of climate change, food banks, Trump, and nuclear Armageddon, it’s comforting to know that people around the world finally have something to get angry about. 

• This article was first published on Computing.
• For a compelling alternative view, try this blog from Chris Yiu.

.chrism


September 2017: UK digital progress excellent, but SME support dying, warns techUK

Chris Middleton contrasts the positive vision of a new public sector report with some critical failings identified by the Civil Service itself.

Goodbye SMEs in government?

TechUK, the organisation for Britain’s technology innovators, has rejoined the political fray with a new paper on how the government can deliver its vision of digital transformation. The report, Smarter Services, has been produced by the organisation’s Public Services Board.

Earlier this year, techUK published an excellent manifesto for post-Brexit digital renewal. What’s different this time around is that the group has drawn its findings from the machineries of government itself: a 2017 survey of 948 civil servants, including 200 at C-level or above.

So what does the public sector think of its own tech record? Although 97 per cent of workers see technology as an enabler or necessity, 57 per cent of respondents believe that a shortage of internal skills is an obstacle to those benefits being realised – a significant increase from last year. Critical skills gaps exist in digital service design, procurement, change management, and data management/analytics. The last two are key drivers towards organisational efficiency, says the report.

It adds: “Senior civil servants and those working in digital roles had more confidence that their department had the requisite skills and capabilities to deliver its business plan than their juniors did.

“When asked to rate their department’s expertise in four key areas (digital service design, data, procurement, and change management), on average 20 per cent more civil servants in digital roles agreed that their department had the skills necessary to deliver its business plan. This could signify that while government has had some success in attracting expertise to its Digital, Data and Technology profession, these skills have yet to permeate the wider civil service.”

The logjam extends beyond the perimeter of government, says techUK, with 79 per cent of civil servants believing that current systems and working practices prevent citizens from interacting with government more online – something the report says citizens have a strong appetite to do. So what’s the solution?

For many in public service, the answer is sharing more information. Removing the barriers to collaboration is an important route to improving citizen services online, according to 93 per cent of respondents. Thirty-six per cent of civil servants think legislation prevents them from sharing more, while a further 36 per cent believe that incompatibilities in internal working practices are the root cause.

Nearly one-third of respondents think moving more citizen transactions online is either too complex a challenge (19 per cent) or too expensive (13 per cent). Among senior civil servants, this rises to 43 per cent, perhaps suggesting that the business case for digital services needs to be made higher up the departmental food chain.

But when it comes to the vision for technology, techUK believes that the government is making the right noises – if one sets aside Whitehall’s criticism of end-to-end encryption, a bedrock of digital trust. “To change and to do so at pace” was how the then Minister for the Cabinet Office, Ben Gummer, set out the vision for the public sector in early 2017, when he announced the government’s transformation strategy.

The report says: “This is a laudable vision, and one that the government has already made great strides towards. The UK is ranked as the best digital government in the world by the UN, and the £450m increase in the Government Digital Service’s budget made in the 2015 spending review signals the government’s intention to build upon this solid foundation.”

Slipping gears 

So the UK is making excellent progress, says techUK, even if the centuries-old Whitehall machinery sometimes slips its gears in its effort to move at the speed of technology disruption. Indeed, a core function of the Civil Service is to prevent disruption as different administrations come and go: a factor that should never be ignored by tech strategists.

The UN’s positive assessment of the UK’s digital programme was supported earlier this year by the 2017 Global Innovation Index (GII), an annual report published by the World Intellectual Property Organisation (WIPO) and two business schools. It rated the UK the best in the world for its overall use of ICT, as well as top in both e-participation and digital government. However, that same report slammed the UK for, among other things, its poor investment in education and in upgrading the national infrastructure. These failings are two further impediments to Whitehall’s digital ambitions.

The techUK report admits that the “challenge remains great”: “The National Audit Office has warned that government has so far struggled to make a success of end-to-end transformation of the sort envisaged by the Transformation Strategy. The disruption of core public services caused by the recent WannaCry cyberattack highlighted that the public sector remains a disparate and often difficult environment for transformation to flourish, with governance, risk, and skills shortages significant barriers to be overcome.”

Of course, the ‘successful’ WannaCry attack was also indicative of a government that failed to listen to repeated warnings, and whose sweeping programme of cuts forced departments to bypass essential OS upgrades.

Nevertheless, the techUK report is upbeat and optimistic, especially when it comes to the UK’s capacity to innovate: “Fortunately, the UK also benefits from one of the most vibrant and thriving digital economies on the planet. UK-based businesses of all shapes and sizes are pushing boundaries, not only in terms of digital innovation, but also of large-scale business transformation and change management.

“This knowledge and experience should prove a valuable resource for the public sector, and industry stands ready to be constructive partners in the transformation journey. techUK has been working hard to bring public and private sectors together to address these issues.”

SMEs vs. ‘the oligopoly’

So what of the future? The organisation makes a number of recommendations for how the transformation strategy can be made to work. It says the government should:

• Increase its willingness to experiment with new working practices
• Develop channels to fund and account for cross-government work
• Create common standards and working practices across departments
• Offer three-year placements in industry for civil servants in technical roles
• Provide all Fast Stream workers with digital skills training, and:
• Use public sector procurement to help foster innovation in the supplier community.

The latter we can file under ‘ambitious’, as the government’s procurement practices are surely part of the problem as much as they may be the long-term solution. Despite the efforts of former Cabinet Office Minister Francis Maude to wean Whitehall off its ‘oligopoly’ habit – to use Maude’s own description of the problem – the government has consistently swung back to the enterprise giants just as the green shoots of cloud-native or SME alternatives have begun to appear.

A good comparison is the scene in Ridley Scott’s The Martian where a critical systems failure causes Matt Damon’s thriving crops to wither in the perishing alien atmosphere.

The report concurs with this assessment: “Despite the government setting a target to spend £1 in £3 of its procurement budget with smaller and medium-sized businesses by 2020, only 21 per cent of civil servants believe that there is an appetite within their department or organisation to increase the involvement of SMEs in the procurement chain. There has been a particularly large drop (13 per cent) in the proportion of respondents working in tech-facing roles who agreed with the statement.

“While only one in ten of those involved in procurement decisions agreed that their department or organisation had access to a wide range of suppliers, less than a quarter picked widening the supplier base as a priority. 24 per cent do not believe they need access to a diverse range of suppliers, down 12 per cent since 2016.”

So the tide seems to be turning against SMEs within the public sector: a cause for concern post-Brexit, when the UK will have no choice but to nurture its home-grown talent. Reversing that trend will demand real leadership, but the government may feel it has more important things to do.

Conclusions

Since last decade’s catastrophic recession, this government and the two preceding administrations have been consistently criticised for focusing too tightly on cost over value. The report notes that, within the Civil Service at least, that culture is changing:

“The drive to deliver better services is increasingly seen as the primary driver of transformation within government rather than cost savings. More than twice as many civil servants see IT as critical to improving service delivery (78 per cent) than view it as critical to making cost savings (34 per cent).”

Both the challenge and the opportunity, therefore, lie in taking that message to the policymakers.

.chrism


August 2017: Is AI automating prejudice?

Chris Middleton reports on how the flaws in human society are already being replicated – often accidentally – by artificial intelligence.

• This article has been quoted in London’s Evening Standard newspaper.

AI is the new must-have differentiator for technology vendors and their customers. Yet the need to understand AI’s social impact is overwhelming, not least because most AI systems rely on human beings to train them. As a result, existing flaws and biases within our society risk being replicated – not in the code itself, necessarily, but in the training data that is supplied to some systems, and in the problems that they’re being asked to solve.

Without complete data, AI systems can never be truly impartial; they can only reflect or reproduce the conditions in which they were created, and the belief systems of their creators. This report will explain how and why, and share some real-world examples. The need to examine these issues is becoming increasingly urgent. As AI, machine learning, deep learning, and computer vision rise, buyers and sellers are rushing to include AI in everything, from enterprise CRM to national surveillance programmes and policing systems.

Are people with tattoos criminals?

One example of AI in national surveillance is the FBI’s bizarre scheme to record and analyse citizens’ tattoos, in order to predict if people with ink on their skin will commit crimes. Take a ‘Big Bang’ view of this project (rewind the clock to infer what the moment of creation must have been), and it’s clear that a subjective, non-scientific viewpoint (‘people with tattoos are criminals’) was adopted as the core principle of a national security system, and software was designed to prove it.

The code itself is probably clean, but the problem that the system is being asked to solve, and the data it is tasked with analysing, are surely flawed. Arguably, they betray the prejudices of the system’s commissioners. Why else would it have been conceived?

In such a febrile atmosphere, the twin problems of confirmation bias in research, and human prejudice in society, may become automated pandemics: AIs that can only tell people what they want to hear, because of how the system has been trained. Automated politics, with a veneer of evidenced fact.

Often this part of the design process will be invisible to the user, who will regard whatever results the system produces as being impartial. A recent AI white paper published by UK-RAS, the UK’s research organisation for robotics and AI, makes exactly this point: “Researchers saw how machine learning technology reproduces human bias, for better or for worse. [AI systems] reflect the links that humans have made themselves.”

That’s the view of the UK’s leading AI and robotics researchers. So, is AI automating prejudice and other societal problems? Or are these issues simply hypothetical?

The racist facial recognition system

The unfortunate fact is that they are already becoming real-world problems, in a significant minority of cases. Take the facial recognition system developed at MIT recently that was unable to identify African American women, because it was created and tested within a closed group of white males. The libraries for the system were distributed worldwide before an African American student at MIT exposed the fact that it could only identify white faces.

We know this story is true, because it was shared by Joichi Ito, head of MIT’s Media Lab, at the World Economic Forum 2017. He described his own students as “oddballs” – introverted white males working in small teams with few external reference points, he said.

The programmers weren’t consciously prejudiced, Ito explained, but it simply hadn’t occurred to them that their group lacked the diversity of the real world into which their system would be released.

As a result, a globally distributed AI was poorly trained and ended up discriminating against an entire ethnic group, which was invisible to the system. That the developers hadn’t anticipated this problem was their key mistake, but it was a massive one.

Male dominance and insularity are big problems for the tech industry: in the UK, just 17 per cent of people in science, technology, engineering, or maths (STEM) careers are women, while in the West the overwhelming majority of coders are young, white males.

The UK-RAS report shares a similar example of societal bias entering AI systems: “When an AI program became a juror in a beauty contest in September 2016, it eliminated most black candidates, as the data on which it had been trained to identify ‘beauty’ did not contain enough black-skinned people.” Again, the humans training the AI unconsciously weighted the data.

The lesson here is not that any given AI or line of code is inherently biased – although it might be – it’s that the data that populates AI systems may reflect local/social prejudices. At the same time, AI is seen as impartial, so any human bias risks becoming accepted as evidenced fact. Most AI is a so-called ‘black box’ solution (see below), making it hard for users to interrogate the system to see how or why a result was arrived at. In short, many AI systems are inscrutable.
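To make that point concrete, here is a minimal sketch – in Python, with entirely synthetic data and invented group labels, not any vendor’s real system – of how an otherwise ‘clean’ classifier trained on data that under-represents one group can end up performing far worse for that group:

```python
# A minimal sketch with synthetic data: nothing in the classifier's code
# refers to either group, yet the imbalance in the training data produces
# very different accuracy for each. All numbers and groups are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    # Each group's true decision boundary sits in a slightly different place,
    # standing in for real-world variation between populations.
    X = rng.normal(offset, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * offset).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(5000, offset=0.0)
Xb, yb = make_group(50, offset=2.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
for name, offset in [("group A", 0.0), ("group B", 2.5)]:
    X_test, y_test = make_group(2000, offset)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Nothing in the model itself mentions either group; the disparity comes entirely from what the training data does, and does not, contain.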

The legal dimension

Why are these risks so important to consider? Evidence is mounting that such data problems may have begun to automate bias within our legal systems: a real challenge as law enforcement becomes increasingly augmented by machine intelligence in different parts of the world.

COMPAS is an algorithm that’s already being used in the US to assess whether defendants or convicts are likely to commit future crimes. The risk scores it generates are used in sentencing, bail, and parole decisions – just as credit scores are in the world of financial services. A recent article published on FiveThirtyEight.com set out the alleged problem with COMPAS:

“An analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend – and potentially treated more harshly by the criminal justice system as a result. On the other hand, white defendants who committed a new crime in the two years after their COMPAS assessment were twice as likely as black defendants to have been mislabeled as low-risk.

“An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system.” Again, this is a problem of flawed data being fed into an application that is seen by its users as impartial.
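As an illustration of the kind of audit ProPublica performed, the sketch below – using invented toy data, not real COMPAS records – computes per-group false positive and false negative rates, the two kinds of mistake described above, and shows how a system can have the same headline accuracy for two groups while making very different mistakes about each:

```python
# A toy audit in the spirit of the ProPublica analysis: compare how often each
# group is wrongly flagged as high-risk (false positives) and wrongly flagged
# as low-risk (false negatives). The data below is invented for illustration.
import numpy as np

def error_rates(predicted_high_risk, reoffended):
    pred = np.asarray(predicted_high_risk, dtype=bool)
    actual = np.asarray(reoffended, dtype=bool)
    false_positive_rate = pred[~actual].mean()    # flagged high-risk, did not reoffend
    false_negative_rate = (~pred[actual]).mean()  # flagged low-risk, but did reoffend
    return false_positive_rate, false_negative_rate

# Hypothetical predictions and outcomes: both groups get 70 per cent overall
# accuracy, but the errors fall very differently.
group_1 = {"pred": [1, 1, 1, 0, 1, 0, 0, 0, 1, 0], "actual": [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]}
group_2 = {"pred": [0, 0, 1, 0, 0, 1, 0, 0, 1, 0], "actual": [0, 1, 1, 0, 1, 1, 0, 0, 0, 0]}

for name, g in (("group 1", group_1), ("group 2", group_2)):
    fpr, fnr = error_rates(g["pred"], g["actual"])
    print(f"{name}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```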

Tainted data in a networked system

The problem of tainted data runs deep in a networked society. Some months ago, a journalist colleague shared a story with Facebook friends of how he searched for images of teenagers to accompany an article on youth IT skills.

When he searched for “white teenagers”, he said, most of the results were library shots of happy, photogenic young people, but when he searched for “black teenagers”, he was shocked to see Google return a disproportionately high number of criminal/suspect mug shots.

(Author’s note: I verified these results at the time. The problem is still noticeable today, but far less overt, suggesting that Google has tweaked its algorithm.)

The underlying point is that, for decades, overall media coverage in the US, the UK, and elsewhere, has disproportionately focused on criminality within certain ethnic groups. This partial coverage populates the network, which in turn reinforces public perceptions: a vicious circle of confirmation bias feeding confirmation bias. This is why diversity programmes and positive messaging are important; it’s not about ‘political correctness’, as some allege; it’s about rebalancing a system before we replicate it in software.

This extraordinary article on Google search data reveals how prejudices run deep in human society. (Sample quote: “Overall, Americans searched for the phrase ‘kill Muslims’ with about the same frequency that they searched for ‘martini recipe’ and ‘migraine symptoms’.”)

Human bias can affect the data within AI systems at both linguistic and cultural levels, because – as we’ve seen – most AI still relies on being trained by human beings. To a computer looking at the world through camera eyes, a human is simply a collection of pixels. AI has no fundamental concept of what a person is, or what human society might be.

A computer has to be taught to recognise that a certain arrangement of pixels is a face, and that a different arrangement is the same thing. And it has to be taught by human beings what ‘beauty’ and ‘criminality’ are by feeding it the relevant data. The case studies above demonstrate that both these concepts are subjective and prone to human error, while legal systems throughout the world have radically different views on crime (as we will see below).

Our systems replicate our beliefs and personal values – including misconceptions or omissions – while coders themselves often prefer the binary world of computers to the messy, emotional world of humans. Again, MIT’s Ito made this observation of his own students.

The proof of Tay

Microsoft’s Tay chatbot disaster last year proved this point: a naïve robot, programmed by binary thinkers in a closed community. Tay was goaded by users into spouting offensive views within 24 hours of release, as the AI learned from the complex human world it found itself in. Humour and internet trolls weren’t part of its training: that’s an extraordinary omission for a chatbot let loose on a social network, and speaks volumes about the mindset of its programmers.

However, the cultural dimension of AI was demonstrated by another story in 2016: in China, Microsoft’s Xiaoice chatbot faced none of the problems that its counterpart did in the West: Chinese users behaved differently, and there were few reported attempts to subvert the application. Surely proof that AI is both modelled on, and shaped by, local human society. Its artificiality does not make it neutral.

These issues will become more and more relevant as law enforcement becomes increasingly automated. The cultural landscape and legal system surrounding a robot policeman in, say, Dubai is very different to that in Beijing or San Francisco.

The rise of robocop

In each of these three locations, robots are already being trained and trialled by local police services: PAL Robotics’ REEM machines in Dubai (in public liaison/information roles); Knightscope K5s in the Bay Area (which patrol malls, recording suspicious activity); and AnBot riot-control robots in China.

There is no basis for assuming that future AI police officers or applications will implement a form of blank, globalised machine intelligence without bias or favour. It is more likely that they will reflect the cultures and legal systems of the countries in which they operate, just as human police do.

And the world’s legal systems are far from uniform. In Saudi Arabia, for example, to be an atheist is to be regarded as a terrorist, and women have far fewer rights than men. In Iran, homosexuality is punishable by death, as are offences such as apostasy (the abandonment of religious belief).

It’s easy to assume that, in the real world, no one would design AI systems to determine citizens’ private thoughts or sexual orientation, and yet here’s an example of AI being deployed to predict if people are gay or straight, a programme that the article describes as an “advance”. Note, too, how quickly this system has been developed within the current AI boom.

Now factor in robot police or AI applications enforcing laws in one culture that another might find abhorrent. The potential is clearly there for technology to be programmed to act against globally stated human rights.

K5 on patrol in California

In the US, the numbers of people shot by police are documented here by the Washington Post, while this report suggests that black Americans are three times more likely to be killed by officers than white Americans. Meanwhile, this article exposes the racial profiling that occurs in some sectors of US law enforcement, despite attempts to prevent it. In the UK, statistics reveal that force is more likely to be used against black Londoners by police than against any other racial group. This is the messy human world that robots are entering – robots programmed by human beings.

Politicians are increasingly targeting minority groups, or removing legal protections from them. In the US alone, recent examples include the proposed US bans on people travelling from certain Muslim-majority countries, and on transgender people serving in the military, along with the proposed removal of legal protections for LGBTQ people and the scrapping of the Obama-era DACA scheme. Russia is among several other countries to turn against LGBTQ citizens.

So might any future robocop perpetuate the apparent biases in the US legal system, for example? As we’ve seen, that will depend on what training data has been put into the system, by whom, to what end, and based on what assumptions. The COMPAS case study above suggests that core data can be tainted at source by previous flaws and inequalities in the legal system.

The limits of AI

But let’s get back to the technology itself. The UK-RAS white paper acknowledges that AI has severe limitations at present, and that many users have “unrealistic expectations” of it. For example, the report says: “One limitation of AI is the lack of ‘common sense’; the ability to judge information beyond its acquired knowledge […] AI is also limited in terms of emotional intelligence.”

Then the researchers make a simple observation that everyone rushing to implement the technology should consider: “true and complete AI does not exist”, says the white paper, and there is “no evidence yet” that it will exist before 2050.

So it’s a sobering thought that AIs with no common sense and possible training bias, and which can’t understand human emotions, behaviour, or social contexts, are being tasked with trawling context-free data pulled from human society in order to expose criminals – as defined by politicians.

And yet that’s precisely what’s happening in US and UK national surveillance programmes.

Opening the ‘black box’

The UK-RAS white paper takes pains to set out both the opportunities and the risks of AI, which it describes as a transformative, trillion-dollar technology, the future of which extends into augmented intelligence and quantum computing.

On the one hand, the authors note: “[AI] applications can replace costly human labour and create new potential applications and work along with/for humans to achieve better service standards. […] It is certain that AI will play a major role in our future life. As the availability of information around us grows, humans will rely more and more on AI systems to live, to work, and to entertain. […] AI can achieve impressive results in recognising images or translating speech.”

But on the other, they add: “When the system has to deal with new situations when limited training data is available, the model often fails. […] Current AI systems are still missing [the human] level of abstraction and generalisability. […] Most current AI systems can be easily fooled, which is a problem that affects almost all machine learning techniques.

“Deep neural networks have millions of parameters, and so to understand why the network provides good or bad results becomes impossible. Trained models are often not interpretable. Consequently, most researchers use current AI approaches as a black box.”

That last quote is telling: researchers are saying that some AI systems are already so complex that even their designers can’t say how or why a decision has been made by the software.
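The scale involved is easy to underestimate. As a rough, back-of-the-envelope sketch – the layer sizes below are invented for illustration, not taken from any deployed system – even a small fully connected network over a standard-sized image has tens of millions of learned parameters, far too many for anyone to inspect weight by weight:

```python
# Back-of-the-envelope parameter count for an invented, modest architecture:
# a 224x224 colour image fed through a few dense layers to a two-way verdict.
layer_sizes = [224 * 224 * 3, 512, 512, 128, 2]

total_params = 0
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    total_params += n_in * n_out + n_out  # weights plus biases for each dense layer

print(f"Learned parameters: {total_params:,}")  # roughly 77 million
# Production deep networks are often larger still, which is why researchers
# describe trained models as effectively uninterpretable 'black boxes'.
```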

Conclusions

Organisations should be wary of the black box’s potential to mislead and to be misled, along with its capacity to tell people what they already believe – for better, or for worse. Business and government should take these issues on board, and the systems they release into the wild must be transparent – as far back as the first principles that were adopted before the parameters were specified. More, the data that is being put into these systems should be open to interrogation, to ensure that AI systems are not being gamed to produce weighted results.
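In practice, that interrogation can start very simply. The sketch below – in which the file name and the ‘group’ and ‘label’ column names are hypothetical placeholders – shows the sort of first-pass questions worth asking of any training set before a model is built on it:

```python
# A first-pass look at a training set before any model is built. The CSV path
# and the 'group' and 'label' column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")

print(df["group"].value_counts(normalize=True))  # is any group under-represented?
print(df.groupby("group")["label"].mean())       # do outcome rates differ sharply by group?
print(df.isna().mean().sort_values(ascending=False).head())  # where is data missing?
```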

Users: question your data before you ask an AI to do it for you, and challenge your preconceptions.

.chrism

• For more articles on robotics, AI, and automation, go to the Robotics Expert page.
• Further reading: How Google search data reveals the truth of who we are (Guardian).
• Further reading: Face-reading AI will be able to detect your politics, claims professor.
