A strong, clear signal in a world of noise.
For Chris’ 2016 blog, go here.
22 December 2015: As Apple intervenes in UK surveillance plans, one MP warns that it may already be too late
UPDATED: Apple’s intervention in the UK government’s plans to introduce mass surveillance of private communications comes at an important time for civil liberties, not to mention for the UK’s – and the world’s – digital economy.
“We believe it is wrong to weaken security for hundreds of millions of law-abiding customers so that it will also be weaker for the very few who pose a threat,” said Apple, adding that the revised Investigatory Powers Bill could also weaken data encryption, interfere with Apple’s and other companies’ products, and force non-UK enterprises to break the laws of the countries in which they are incorporated. None of which would benefit the UK’s digital economy and reputation, let alone protect British citizens. Vodafone, 3, O2, and EE are among other companies expressing disquiet at the proposals.
However, one MP believes that it is already too late to scupper the plan. The Greens’ Caroline Lucas, MP for Brighton Pavilion, told me: “Judging by the response to the Home Secretary’s announcement, very few MPs will be opposing the Government – all the other opposition parties spoke in support of the new powers and said they were justified in the fight against terrorism.”
In a written statement to me about the Bill, she warned: “Changing the name of the legislation does not change the fact that it will actively undermine fundamental rights and civil liberties. When governments want to undertake large-scale infringement of the individual’s right to privacy – by allowing records of all our communications to be kept as a matter of course, for example – we need effective parliamentary scrutiny more than ever.”
Lucas promised to fight the plans and protect citizens’ right to privacy: “I will vote against the Investigatory Powers Bill and will also make every effort to amend it to be in keeping with, for example, EU rulings about the blanket retention of specific data and in relation to the Wilson Doctrine [which protects MPs from surveillance], something which I’ve taken the Government to court over.”
That the stakes are high should be clear from the stance taken by normally moderate commentators. For example, even stock market insights platform Seeking Alpha said of the government’s proposals: “Once the apparatus is created, the potential exists for it to be perverted in the future for… political control. The surveillance state has always been an essential component of the totalitarian state.”
That is an extraordinary statement for a publication that supplies investment guidance, but the implication is clear: the Bill threatens both civil liberties and the business models of global technology companies. At present, Seeking Alpha still rates Apple as a ‘Buy’, but that guidance is now heavily qualified.
In its report on Apple’s objections to the Investigatory Powers Bill, Seeking Alpha said it believed the Bill would become the model for the surveillance state globally, and that companies such as Apple would be forced to create national data centres in order to comply with local data-governance laws – effectively breaking the internet apart into national fiefdoms.
Meanwhile, any ban on end-to-end encryption would threaten products such as Apple Pay, it added, and drive encryption technology underground, where it would still be used by criminals.
My own objections to the Government’s proposals are set out in detail here, and they include the fact that one million children risk being investigated by the police as a direct result of the Bill, driving a coach and horses through the UN Convention on the Rights of the Child, the most widely ratified human rights treaty in history.
Find out why in this article.
17 December 2015: Why the robot banking machine doesn’t work
News that Bank of America’s Merrill Lynch financial advice arm is to be joined by robots set the internet chattering recently, not least because Bank of America has been the world’s largest wealth-management operation since it acquired Merrill Lynch at the height of the 2008 financial crisis.
There are delicious ironies in the news, as it was wealth mismanagement that precipitated the global crash, which brought Merrill Lynch to the brink of bankruptcy seven years ago. Can robots do better than wreck the world financial system?
The robots in question are neither the cute NAO models favoured by Japan’s Mitsubishi UFJ bank, which earlier this year installed them in branches to greet customers; nor are they like the sundry humanoids serving in Nagasaki’s Henn-na robot hotel, which are there to slash HR costs and maximise profits. They are – or rather, it is – a prototype automated investment platform. In 2016, it will be set to work on the sub-$250k accounts that are the preserve of the bank’s Merrill Edge wing.
The bank’s reasoning is not that the platform will outperform its human equivalents, necessarily, but that its very futuristic newness will help attract younger investors, in the same way that smartphone/app-only accounts delight customers who are too busy staring into their palms to risk human interaction. One financial analyst said that Bank of America wants to prevent young investors from getting “hooked” on a rival platform, which makes it sound like crack cocaine or an online casino – not to mention a tactical, rather than strategic, decision.
In short, it’s a disruptive technology, and disruption is apparently what a financial sector that used to be about such things as stability, probity, and the tedious predictability of compound interest needs. At least, that’s the opinion of the bankers whose quest to minimise risk on a micro level seems to be accompanied by a thrill-seeking ramping-up of macro risk so that the ultimate beneficiaries, offshore investors, never get bored.
The lemming-like culture of modern banking was demonstrated when one response to the plans, from rival bank Wells Fargo (which has the most human skin in the game), was that they were: “A real threat to our business, because we are disproportionately full-service, high-value-added, person-to-person activity,” according to CFO John Shrewsberry.
Wells Fargo knows all about threats to its business: rival Morgan Stanley reportedly lost up to 80 per cent of its market value in 2008. We can infer from these comments that the sector’s tactical response to automation may be to become less full-service, less value-added, and less focused on people – which would seem to miss the lessons of the financial crisis.
But the big question is: will robots work in finance? In many ways, it’s an absurd question to ask, because the core condition of any robot is that it follows instructions: algorithms, written by human beings, that produce pre-defined outcomes. The desired outcome of most investment advice is “make money”, which forces onto customers the same binary choice that faced the banks pre-2008:
• Boring option: make safer investments that, historically, have been shown to grow predictably over a long timescale (with the caveat that Western economies are in recession one-third of the time). This option tends to favour old industries, such as oil, tobacco, pharmaceuticals, and weapons, and is the reason why change is so hard to effect in Western economies.
• Exciting option: gamble, and hope that a more risky investment pays off spectacularly. This tends to favour new technologies, and buying the debt of people who can least afford to repay it. (Just ask Merrill Lynch, Morgan Stanley, or the population of Greece about the latter.)
The human machine
As I explained in a previous report on robotics and automation, banks’ talented, skilled, empathetic human employees are already functioning like robots, because the companies for which they work have become giant compliance systems.
As anyone knows who has sought financial advice recently, or tried to open an account in a physical branch, you’re talked through a series of slides and tick boxes, the output of which is ‘Yes’ or ‘No’. Bank employees are forbidden to step outside of those algorithm-like instructions and use their human intuition. If the computer says ‘no’ then the answer really is ‘no’. That’s precisely how robots work, so those processes may as well be automated. In time they will be (in a sense it’s insulting to everyone involved to give such decisions a human face).
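The tick-box process described above is, in effect, a short rule-following program. A minimal sketch of that idea (the rule names and the credit-score threshold here are invented for illustration, not taken from any bank’s actual checklist):

```python
# Illustrative only: a toy version of the "computer says no" checklist flow.
# Every rule name and threshold is a made-up example, not a real bank policy.
def account_decision(answers: dict) -> str:
    """Walk a fixed checklist in order; the first failing rule ends the process."""
    rules = [
        ("has_photo_id",         lambda a: a.get("has_photo_id", False)),
        ("has_proof_of_address", lambda a: a.get("has_proof_of_address", False)),
        ("credit_score_ok",      lambda a: a.get("credit_score", 0) >= 600),
    ]
    for name, check in rules:
        if not check(answers):
            return f"No (failed: {name})"  # no human discretion allowed
    return "Yes"

print(account_decision({"has_photo_id": True, "has_proof_of_address": True,
                        "credit_score": 640}))  # Yes
print(account_decision({"has_photo_id": True, "credit_score": 700}))
```

The point of the sketch is that nothing in it consults judgement or context: the branch worker who reads the slides aloud adds no information the loop doesn’t already have.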
So the hidden story of robotic automation is that more and more industries will ask their human employees to behave in a machine-like way as stage one in a long journey towards automating those tasks. In any industries that choose to adopt machine-like behaviour, machines will win.
But back to banking. It stands to reason that the outcome of increased automation can only be as good or as bad as the thinking behind the algorithms. Put simply, robots will either succeed or fail faster, cheaper, and more efficiently, which will at least lower the organisation’s costs. But that is not the same thing as mitigating risky behaviour.
A new report
News of the innovation comes at an intriguing time: publication of a detailed report, also from Bank of America, on which disruptive technology is the most likely to be profitable (for banks). The company identifies three broad trends: the Internet of Things (IoT), what it calls “the sharing economy”, and online services. Robotics links all three, which to any bank taking a tactical macro view means: “robots make money”. At least, in a sector in which everyone does what everyone else does and no one challenges the assumptions on which group decisions are made.
The report says: “We have entered a period of accelerated innovation, made possible by the confluence of many complimentary [sic] enabling technologies required to change a business model. Think of it like this: big technology innovations often require many smaller technology advances first. This is the ‘building block’ approach to disruption, and we believe the majority of building blocks are now in place.”
But what might the impact of robotics be on human employment, given that, in banking at least, human employees are becoming removable cost centres in what is, quite literally, now a money-making machine? (One that, in order to make more money, may focus less and less on adding customer value, as we’ve seen.)
The report warns: “Robotics could worsen labour market inequality: the number of industrial robots is up 72 per cent in the past 10 years while the number of US manufacturing jobs is down 16 per cent… Technology poses major government policy issues with regard to income inequality, privacy, and cybersecurity.”
The report also states the bank’s belief that “tech” is a “micro-deflationary” force, but not “macro-deflationary”, and adds:
“Innovation and the disruptive role of technology is a theme-rich cluster as the corporate and government world shifts from one of ‘too big to fail’ to one of ‘too big to succeed’ via ecommerce, esecurity, robotics, and genomics. Today’s technological disruption is universal, and trends in data transportability, cloud wars, the generational shift away from ownership toward experience, wearable tech, and crowdfunding, will create substantial divergence in investment winners and losers in coming years.”
Despite all this, the report claims that the historic argument for technology-driven unemployment is thin, and it proposes the performance of the US economy as proof. In the US, technology disruption is both endemic and central to economic prosperity, and yet overall employment stands at 94.5 per cent. This may be true, but the bank’s ‘building block’ analogy of a gradual accretion of discrete, disruptive advances may also mean that the same building blocks are being put in place for rising unemployment.
The problem with the whole debate about robots is that, over time and until the birth of true artificial intelligence, robots are merely low-cost human proxies, a means to take cost out of large organisations and maximise the economic benefits internally to fewer and fewer people.
As more and more money flies offshore, the economic benefits increasingly flock towards offshore investors. In the long term, the logical endpoint of this process – if you follow the money – is some companies becoming largely automated entities serving the financial interests of small groups of mega-wealthy individuals in tax havens. Perhaps this is where the current model of capitalism has been headed all along: an endpoint, rather than an ongoing process of free-market entrepreneurship and trickle-down benefits for all.
The reality is ‘trickle up’. In the UK, the gap between rich and poor has been increasing significantly for several years, with the richest one per cent of the populace now owning more wealth than the ‘bottom’ 57 per cent combined, according to this report. Both The Guardian and Independent described the last Budget as a wholesale transfer of wealth from the poor to the affluent – and much of that wealth is ultimately based offshore.
According to Reuters, the seven largest investment and corporate banks operating in London paid a combined £21 million in corporation tax in 2014, despite generating UK revenues of $21 billion and profits of £3.6 billion. Five of the banks – JP Morgan, Bank of America Merrill Lynch, Deutsche Bank AG, Nomura Holdings, and Morgan Stanley – said that their main UK arms paid no corporation tax at all.
So why aren’t prices falling more generally in society at large, so that other human beings can benefit from all this wealth generation too? Because price deflation is the enemy of any capitalist system. Technology itself is commoditising, of course (unless you’re buying a lifestyle or aficionado brand, such as Apple), but the services available through your phone, tablet, smart TV, or games console are generally getting more and more expensive.
For example, most movie downloads are no cheaper to buy (or rent) than a DVD or a Blu-ray, despite the fact that the costs of manufacturing, packaging, and distribution have been stripped out. In real terms, therefore, the cost of that content is rising, not falling – unless you get it free from a streaming service, along with reams of intrusive adverts.
Of course, in some sectors such as entertainment and the media, ‘content’ is sometimes given away free, which has the effect of stuffing every digital channel with noisy intermediaries, middlemen, and advertising. In those cases, content is being positioned merely as an incentive to subscribe to a networked service, and gaining access to people’s personal data is the advertiser’s incentive on the other side of the deal (regardless of whether those people are actually customers, thanks to the networked effect of people sharing their contacts and friends lists on social platforms).
But prices in general tend to be kept artificially high in order to maximise the profit potential inherent in reducing production costs, increasing automation, and outsourcing to countries with low labour costs (where automation will also increase to combat rising wages).
A breaking apart
So the key thing missing in all such reports on robotics and automation is this: while overall employment may not be falling – yet – and while many large organisations (such as banks) can increase their profits through automation, the ability of more and more human beings to make a decent living outside of these increasingly automated megastructures is collapsing.
This forces more and more people in society at large to pick up ad hoc income from the breaking apart of other industries, such as by becoming an Uber driver, or by putting their apartment on Airbnb, or by offering their freelance services via a cloud platform (often for micropayment rates). This not only pushes down the end-user costs of service-based activities, but also reduces their perceived value for potential clients and strips out regulatory protections.
Sooner or later, the taxman will clamp down on all those ad hoc, citizen-generated, peer-to-peer activities – just as he has on the thousands of small businesses that are based on eBay – while the robotic megacompanies continue to move their money around the world to avoid paying tax to national exchequers.
In the long run, this means that the people who are paying for the maintenance of local or national services will be, increasingly, the majority of the population who can least afford to do so. If you want proof, consider this. The average UK citizen has an annual personal tax bill of £4,985. In the UK, Facebook paid £4,327 corporation tax last year, while – as we have seen – five of the largest investment banks in the UK paid no corporation tax at all.
In an environment in which a UK citizen with average earnings pays more tax than a billion-dollar enterprise, more and more people will try to make money by offering services privately via global cloud platforms, such as Facebook, playing cat and mouse with the local tax authorities.
Unfortunately, this also means that the industries that are most resistant to automation – all those definitively human activities that include writing, designing, making music, and so on – are now regarded (as I said in my recent blog entry on Pandora, below) as low-paid support acts to a rapacious advertising sector.
Increasingly, makers are expected to provide their services for nothing, because (in the immortal words of the PR industry) “it’s good for your profile”. Put bluntly, the money-making potential of rising numbers of activities that can’t be automated is falling, unless your clients are wealthy dilettantes on the one hand, or wealthy aficionados on the other.
So automation really isn’t giving us all the free time to be creative, imaginative people, sitting on mountainsides having Platonic thoughts about the nature of reality. There’s very little money in it, and over time there will be less and less. My advice? Learn robot maintenance, make bespoke goods for billionaires, and move to a tax haven. But whatever you do, don’t take up gambling.
In the meantime, governments must hold the banks to account and force them to pay for the damage they did to the global economy, and not punish those people who are least able to look after themselves locally. There really is no alternative.
• An earlier version of this article appeared on Diginomica.
• Further reading: https://www.rt.com/uk/325595-scientists-robots-jobs-ethics/
16 November 2015: Bugs in the empathy code
We’ve all been shocked, saddened, and disgusted by the attacks in Paris, which can be seen as armed assaults on life, love, music, liberty, multiculturalism, and more, in a city that’s renowned not only for its beauty and zest for life, but also for its centuries-long contributions to philosophy, art, and political discourse.
As I reported in my previous blog [15 November, below], a friend of a friend is a victim of the Paris attacks, and he’s now fighting for life in hospital. The wounded man is Moroccan, and he may wake up in a world in which some see him as a potential threat, and not as an innocent victim and Parisian.
Which brings me to my point. Over the weekend, Facebook produced its ‘French flag app’, which allowed millions of us to express our solidarity with the victims in red, white, and blue. It’s the latest example of ‘empathy code’ and it follows Facebook’s rainbow flag app earlier this year, which enabled users to show their support for LGBT rights and equal marriage (over 26 million people did just that, which was inspiring to see). Simple, heartfelt, effective.
What about the others?
But while it’s been wonderful to witness the outpouring of love and support for France – including from Muslims, who are by far the biggest victims of this extremist violence worldwide – we should remember that other cities and communities have been attacked too. Saying this doesn’t somehow ‘water down’ our support for our friends and neighbours in France, or our shock and sadness at the Paris attacks, and the widespread criticism of the ‘flag app’ isn’t some kind of hipster assault on the ethics of grieving.
The criticism is of the markedly different way that Paris was treated by a ‘global’ technology platform. Beirut was attacked by the same group, leaving 44 people dead and scores wounded; any technology platform that says it is global – as Facebook does – should acknowledge that in exactly the same way.
In Nigeria, hundreds have perished in extremist violence, including in attacks on mosques and firefights that closed 150 schools and led to child abductions, while a single attack in Pakistan killed 130 children. The list goes on and on. ‘Global’ platforms should reflect these things too, otherwise they’re not as global as they claim to be.
Google UK carried a black ribbon as France held three days of national mourning, but there was no reference to the Beirut atrocity on the Google Lebanon home page; perhaps locally it would be seen as disrespectful. And while Facebook was rightly praised for the swift implementation of its ‘safety check’ app for French users – a great idea – it was criticised for not creating similar features for other countries, specifically Lebanon on the same day.
The absence of any Facebook app that allows users – including those in the Middle East, South Asia, or North Africa – to express sympathy with victims in non-Western countries is both interesting and troubling. While there’s no evidence that Facebook is preventing it as a matter of policy, it’s supposed to be a global platform, which means that it should not only have a global reach, but also a global perspective, however much we use it to support our own loved ones and neighbours.
Each of us grieves selectively, of course we do; I grieve for Paris, the city I know and love. I worry about my friends there and about their young families. But a global technology platform can’t be quite so ‘partial’, or appear to be so.
Might Facebook be worried that producing apps that support victims in (for example) Lebanon, Nigeria, Kenya, or anywhere else, could alienate Western advertisers? Or leave it open to criticism that it must be supporting any regime for which it produces a flag? I don’t know – and I’m not suggesting that’s the explanation. But, at the very least, the flag-shaped hole in Facebook policy reveals the problem of creating a global platform from a Californian office block.
Facebook needs to grow up
It’s conceivable – if unlikely – that Facebook simply doesn’t understand that people might have friends, family, colleagues, or roots in a country where Mr Zuckerberg hasn’t exited a plane looking like a sophomore telling himself a joke. If you want to be global, Mr Zuckerberg, be global.
Alas, Facebook’s understanding of human beings often seems faulty. This is the company that pushes your own ‘year ago’ memories at you daily, even if you tell it to stop, on the assumption that they must all be happy ones. Perhaps Facebook believes that – mass murder and atrocities aside – its users live in a perma-happy world of non-stop, US-centric fun, and that every memory of ‘a year ago today’ must contain a kitten, a flat white, or a snapshot of an expense account.
In my case, today’s algorithm-selected memory from 2014 was of a dear friend who had passed away – accompanied by party graphics and a patronising message. Ironic, isn’t it? I would have preferred to remember him in my own way, but I wasn’t given the option.
Any social platform that believes that all memories are happy – and that all grieving must be in accordance with Western sentiments – is storing up a world of problems for itself.
Don’t get me wrong: Facebook has a tough job and is doing it very well; its success has been the most extraordinary story in 21st Century business. In my case, it helps me keep in touch with dozens of people – indeed, it told me that all my Parisian friends were safe, for which I’m extremely grateful.
But with a claimed 1.5 billion users worldwide, why does no one internally at Facebook ask obvious questions about its decisions? That’s where its core challenge now lies. Facebook simply doesn’t know how selective it’s being – even if we might concur with its selections in our personal lives and in our own private experiences. To put it bluntly, Facebook needs to employ some cynics.
Mounting criticism suggests that it needs a change of perspective soon – at least, if it genuinely wants to be a global platform. Otherwise it risks alienating as many people as it empowers.
15 November 2015: The night train (some thoughts on terror)
Many years ago, I had one of the few experiences in my life that was genuinely frightening (I don’t scare easily and keep a cool head in dangerous situations). I was still living in London, had come down to Brighton to visit a friend, and I found myself getting the last train home on a Friday night just as the pubs were shutting. It was one of the old trains with the ‘slam doors’ and, thanks to some problem or other, there were only four carriages. And so it was that hundreds of drunk, rowdy, aggressive lads all piled on immediately before we left: some even climbed into the overhead luggage racks.
Almost as soon as the train had pulled out of the station, a core group of lads (there were 20 or so in my carriage alone) turned violent as we picked up speed and hurtled into the night, into the tunnels on the edge of the city. They were screaming and shouting, and began smashing all the lights, one by one, and breaking the windows. So it was getting darker and darker, and noisier and noisier, and wind was rushing into the carriage through the broken windows. A couple of lads even opened one of the doors and left it hanging open as the train hit 90mph.
Journey into darkness
The lads began terrorising other passengers, picking fights with anyone who looked at them or spoke up, and it was a real possibility that someone might be thrown out of the speeding train. I did the only thing possible in that situation, bearing in mind I was sitting on my own, vastly outnumbered by angry, aggressive males on a pitch-dark speeding train with broken windows. I took a deep breath, centred myself, made myself look big, raised my head and stared straight ahead of me, but with my eyes slightly down (not at the eye level of anyone in front), rooting myself to the spot in my seat, making sure that I became an immovable object in the chaos. I also thought about my friends, about life, love, music… all the things that make life worth living: focusing on the future.
Survival rested on sitting tight, looking strong, staying calm, being motionless, and not losing my head under any circumstances. And when the train pulled in at the first stop – 15 minutes into the journey – the lads got off and disappeared, leaving a trashed train behind them.
So why do I share this? Because right now, the world feels like that train: a train being driven into the night and smashed up by madmen. Not only Paris, but Lebanon (attacked by the same group on the same day), New York, Syria, Spain, Palestine, Brussels, Russia, Nigeria, and the warzones from which thousands are fleeing every day in fear of their lives.
The lesson I learned is: Don’t make a rash decision. Don’t become a different person. Don’t become another madman in a crowd of madmen. Don’t let them set the agenda for you. Don’t shout and scream and join the chaos. Centre yourself. Stay calm. Stare straight ahead. Gather strength to yourself. Don’t give them what they want. And remember all the people you care about.
Why? Because we’re all on the same train and, to the best of our knowledge, there is no one else out there in the dark. The madmen want what they always want: war, collision, violence, noise, the imposition of ideas by force. They want us to turn on them – and to turn on ourselves and our fellow passengers, and, by doing so, to make our enemies more numerous and more powerful.
• Please read: News from Paris
In that same spirit, my friend Guy in France posted this update yesterday on Facebook, which he has let me share:
Guy wrote: “Friends all over the world, I have learned this evening that a colleague and dear friend was among the victims in Paris last night. He was hit by three bullets and, as I write, he is in an artificial coma after 10 hours in surgery. I will not name him here, out of respect for the privacy of his family.
“I am tonight full of fear of the aftermath of this declaration of war in France. And I am terrified in particular because extremist voices are already fuelling fires of hate, proclaiming that everyone should beware of immigrants and Muslims, etc. My friend is Moroccan. ‘They’ shot him last night. ‘They’ didn’t ask of his origins or beliefs. And whilst I pray for him to survive, I fear already that in the weeks and months to come, he will be an innocent victim again, at the hands of those who will stigmatise him because he, in their words, ‘looks like a terrorist’.
“So, friends, beyond your compassion, your flags of solidarity, please, I beg you, be vigilant. Now, more than ever, stand up against ALL extremism, ALL ignorance, ALL hatred. My friend will need you. France will need you. The world can only prevail on these terms.”
This morning, Guy added: “He was the only survivor of his group of friends outside the Carillon bar. He is fighting, hanging in there.”
5 November 2015: Why the surveillance state will increase crime, undermine trust in the UK’s digital economy, and put one million children at risk
An updated version of this blog entry is available separately here, should you wish to link to it.
The British government has published proposed revisions to the Investigatory Powers Bill, which retains elements of the so-called ‘Snooper’s Charter’. If it secures Parliamentary approval – which is not certain – the Bill will be significant because it requires telecommunications providers and ISPs to retain all customers’ phone and internet data for up to a year. Ministers, not judges, may have the final say over whether records are seized.
So, beyond the vital questions of whether the scheme is either acceptable or viable – which are explored in detail below – a key question must be: are ISPs and telecoms providers ready for it, given that they’re being placed in the front line of the government’s plans?
In the wake of the TalkTalk hack just over a week ago, in which unencrypted data from 157,000 customer accounts was stolen, the answer should be clear: no. And if it proves to be the case that a group of children were behind that attack (police have reportedly questioned a 15-year-old boy and other teenagers), then that answer should be shouted from the rooftops of Westminster. In total, four million customer account details were at risk.
The hack of the provider’s website demonstrated two things. First, that customer records and account information are irresistible to criminals in a world in which private data is a de facto currency. And second, that there is no legal obligation on internet companies, telcos, or cloud platforms to encrypt it.
However, businesses can breathe a sigh of relief that the amended Bill will not (at this stage) seek to ban the use of end-to-end encryption – a proposal that would have driven an ideological tank through the UK’s digital economy. But if such a proposal were to be pushed through Parliament, it would bring Whitehall into head-on conflict with technology companies, such as Apple, BlackBerry, Microsoft, and many apps providers, which are increasingly building encryption into the heart of their offerings.
Inevitably, the Bill will drive more people towards privacy-shielding apps and devices, while reducing citizens’ trust in their own government. Post-Snowden, the lesson must be that national surveillance programmes never achieve what they set out to do, which is to reinforce trust and security; their effect is always the opposite.
But back to TalkTalk. The company’s CEO Dido Harding went as far as to cite the Data Protection Act itself as her defence against widespread criticism of account data being stored ‘unsalted’ – that is, without the random values that make stolen credential files far harder to crack. The Act fails to make data encryption a legal requirement, allowing providers like TalkTalk to hide behind its poorly worded terms.
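Salting is neither exotic nor expensive. Here is a minimal sketch in Python, using only the standard library – the function names are mine for illustration, not anything from TalkTalk’s actual systems:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a random per-record salt (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)  # a unique salt per record defeats precomputed 'rainbow table' attacks
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```

A dozen lines of standard-library code, in other words – which makes the ‘the law didn’t require it’ defence all the thinner.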
So this all-too-public failure of a major communications provider to secure customer data should serve as a warning to Whitehall: it will be open season on the internet records of UK businesses and private citizens should the amended Bill pass through Parliament, because their allure will be irresistible to criminals.
ISPs, telcos, mobile providers, and others, will be expected to secure all that data from day one, even though there is no legal obligation on them to encrypt it. And as dozens of separate private sector entities, their response will inevitably be piecemeal, rather than a co-ordinated national strategy.
By – in effect – leaving it up to the market to decide how best to react to a centralised programme of national intelligence gathering, the government’s plan has the makings of a dangerous ideological gamble with citizens’ personal security. Common sense suggests that these facts alone risk increasing cybercrime and decreasing national security, rather than bolstering the government’s fight against terrorism, organised crime, and abusers.
• Lording it over the Bill?
Away from the TalkTalk boardroom, Dido Harding is Baroness Harding of Winscombe, a member of the House of Lords who has – ironically – stated her personal determination to make the Web safer for all, especially for women and children.
In February this year she said in an interview with ThisIsMoney.com: “We’re a democracy and so you want Parliament to make these decisions. In the course of the next couple of years we’re going to have a national debate about it. I want to be part of that [debate], but, my goodness, as an internet service provider chief executive I shouldn’t be the one making the decision, society should.”
Society’s opinion, however, hasn’t been sought. And Harding’s words also ring hollow because her company has failed to protect customer data from malicious intrusion and theft. So it will be interesting to see whether she speaks out in favour of the Bill or opposes it – a classic case of a company being forced to decide which is more important: shareholder value, or taxpayer value?
But will the amended Bill itself protect vulnerable children – as the Home Secretary claims, and as Harding wants? Again, no. Quite the reverse.
• Increased risk to children
A programme of national surveillance will massively increase the risk of harm to the UK’s children, for three simple reasons.
First, children’s rights to privacy and free association are protected under Articles 13 to 17 of the UN Convention on the Rights of the Child, so the British government would have no choice but to seek a national security exemption against the Convention – the most widely ratified human rights treaty in history – given that any national data-mining exercise would inevitably gather data about children’s internet activities as well as those of adults.
(The risks of data-mining algorithms, machine-based decision-making, and surveillance are explored here.)
Second, child-protection workers, teachers, children’s charities, anti-bullying organisations, and other childcare professionals all tend to agree on one thing: that once children reach adulthood at 18, they should be given the right to have their childhood internet activity ‘forgotten’, so as to prevent their own data from causing them problems later in life, such as with prospective employers. The Bill makes that impossible, and therefore acts against the professional opinion of the very people whose job is to protect children.
Indeed, the danger of all citizens being ‘oppressed by their own data’ would be significantly increased by the scheme.
And third, for another important, but discomforting, reason: by far the largest distributors of explicit images of people under the age of 18 are minors themselves. This is the phenomenon known as ‘sexting’ – the sharing of self-made explicit texts, images, and videos – which a number of reports suggest affects up to 25 per cent of all teenagers, with 80 per cent of those involved being under 18.
So let’s do the maths. There are five million teenagers in the UK (according to government statistics). If up to 25 per cent of them are ‘sexting’ (as surveys have suggested), that’s 1.25 million teenagers. If 80 per cent of those involved are under 18 (as surveys have found), then one million children could be at risk of prosecution under anti-p*rn laws as a direct result of a national surveillance programme.
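Those back-of-an-envelope figures – which rest entirely on the surveys’ own estimates – can be checked in a few lines of Python:

```python
teenagers = 5_000_000    # UK teenagers (government statistics, per the article)
sexting_rate = 0.25      # up to 25% of teenagers 'sexting', per surveys
under_18_share = 0.80    # 80% of those involved are under 18, per surveys

sexting_teens = teenagers * sexting_rate          # 1,250,000
at_risk_minors = sexting_teens * under_18_share   # 1,000,000

print(f"{int(sexting_teens):,} teenagers sexting, "
      f"{int(at_risk_minors):,} of them minors at risk of prosecution")
```

Upper-bound estimates, of course – but even if the surveys overstate the behaviour several times over, the number of children exposed remains in the hundreds of thousands.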
Scaremongering? No. Prosecutions have already taken place in both the UK and the US, where teenagers – including some aged over 16 who have been in legal, consenting relationships – have been convicted under anti-p*rn laws for sending pictures of themselves to their partners, because they are still minors. Any explicit image of someone under the age of 18 is illegal, so the two-year gap between the age of consent (16 in the UK) and the age of majority (18) is proving to be a legal minefield in the mobile age.
In the US, a Carolina teenager was recently convicted of being both the perpetrator and the victim of this crime, for having an explicit image of himself on his own mobile. There’s no evidence he sent the image to anyone, or behaved illegally in any other way, but he is now regarded as having committed a sex crime. Under UK law, any minor who ‘sexts’ – and surveys suggest there are one million of them – would be regarded no differently to the abusers that the Bill is designed to catch, and they would be added to the same register.
What we, as adults, think about the morality of ‘sexting’ is largely irrelevant, as the UK’s National Crime Agency (NCA) admits. In a recent interview, Zoe Hilton, head of safeguarding at the NCA’s CEOP Command said, “With smartphones and tablets, and new apps emerging all the time, this behaviour is becoming quite normal for teenagers.”
The conclusion is inescapable: the amended Bill may – demonstrably – criminalise thousands of young people, even though one of the government’s stated reasons for ramping up national surveillance is to protect children.
• The technical challenges
Then there are the technology challenges of an overt national surveillance programme, given that an internet that facilitates mass surveillance is an internet that is less secure and makes criminal acts much easier to commit.
IP addresses don’t identify people – nor, in many cases, even devices. IP addresses can be bought and sold – as was evidenced by the British government recently selling off unused IP addresses to, among others, Saudi Arabia.
IP addresses don’t even pinpoint geographical locations: they can be cloaked; they can be rented off the shelf via (legal) IP-hiding and proxy server platforms that allow users to appear to be anywhere in the world, from Colorado to Korea; and they can be absorbed into malicious botnets without the user having any idea that their computer is being used to send spam or viruses, or carry out denial-of-service (DoS) attacks.
Just as significantly, there will soon be untold millions of new IP addresses, thanks to the Internet of Things (IoT): all those smart devices that will be coming online over the next few years under the extended addressing made possible by IPv6. A tsunami of data, in fact, which the police almost certainly lack the resources to deal with.
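The scale of that expansion bears spelling out. IPv4 uses 32-bit addresses, IPv6 128-bit ones – and the difference is not incremental but astronomical:

```python
ipv4_addresses = 2 ** 32    # total IPv4 address space: ~4.3 billion
ipv6_addresses = 2 ** 128   # total IPv6 address space: ~3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")
print(f"IPv6 space is {ipv6_addresses // ipv4_addresses:,} times larger")
```

An address space tens of orders of magnitude larger than today’s, much of it attached to cheap, poorly secured devices: that is the haystack in which investigators would be asked to find needles.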
The IoT itself will pose challenges on an unprecedented scale. The security of smart devices, from lightbulbs to cars, fridges, and environmental control systems, has been found wanting in many tests, including some carried out recently by IBM, which used an MP3 file to remotely disable a car’s brakes, and a building’s HVAC system to gain access to corporate wifi credentials.
What all this means is that the fact that a device appears to be in a specific real-world location, such as an office or house, and has been used in an illegal or ‘suspicious’ act does not constitute reliable evidence of wrongdoing by its owner.
Nor does a device’s use in ‘suspicious’ or illegal circumstances prove, in itself, that the owner has committed a crime, given that anyone might have used the device – including a child, as we’ve seen – or hacked into it.
Surreal and absurd though it may sound, the IBM tests prove that it’s entirely possible that a hacker could gain control of a household’s broadband account – and any computer connected to it – via an insecure smart lightbulb, for example. And as more and more IoT devices are rushed to market to capitalise on the first wave of consumer interest, it’s inevitable that the security of many of them will be poor to non-existent.
In short, the technology challenges inherent in any meaningful monitoring and data-mining of internet records to identify criminal behaviour are so vast and so complex as to make the whole scheme nonsensical. That could mean thousands of false positives and a colossal waste of police resources.
Meanwhile, it goes without saying that organised criminals, terrorists, and pornographers are the very people who will have the skills to evade blanket surveillance. In fact, most ‘millennials’ or ‘digital natives’ who have grown up with the internet already have the innate skills to sidestep surveillance, while anyone can buy a pay-as-you-go phone, or a phablet from a secondhand store and put in an untraceable SIM.
And there’s another consideration, one that almost no one talks about. Someone in a back room somewhere will be asked to write algorithms to identify ‘suspicious’ patterns of behaviour, or ‘suspicious’ website visits and trigger words, because it stands to reason that human beings will have neither the time nor the resources to monitor an entire nation’s internet activity 24 hours a day, 365 days a year.
Who writes those algorithms – based on what rules, what keywords, and what concepts of ‘suspicious’ behaviour – is an important question, and the fact that politicians, not judges, will have the power to hit ‘Enter’ on investigating that activity suggests that those decisions may have a worrying political dimension. With no digital bill of rights in place to give citizens legal protection against being (to use analyst Ray Wang’s phrase) “oppressed by their own data”, this may create serious problems in any society that’s driven by powerful political ideology.
• The business angle
Now, beyond the alarming lack of preparedness of some ISPs and telcos, few of these unfortunate side effects of the amended Bill will have a direct impact on business, perhaps. However, three things certainly will: the rise of Bring Your Own Device (BYOD) schemes and ‘shadow IT’ in business; the growing grey area between ‘business’ and ‘private’ use of technology; and any lack of trust in the UK’s digital economy and in its technology providers.
The first two can be taken together: the lack of any meaningful distinction today between business and consumer IT. Increasingly, employees use their own devices for business (aka ‘BYOD’), and informally use their own cloud applications and storage facilities within the business (aka ‘shadow IT’, or ‘Bring Your Own Cloud’ [BYOC]). This shifts their private internet usage into the corporate realm.
Many of those employees also work from home or wherever there is free wifi. This has the reverse effect: it shifts corporate internet use into public and private spaces.
The use of encrypted communications, proxy servers, and other tools by enterprises is central to the successful running of the digital economy, especially in areas such as financial services, defence, and any business that revolves around intellectual property – which, in this day and age, is most business. In short, these technologies are core to the UK’s economic health.
But many private individuals are also ‘companies’: self-employed people, freelancers, and so on – anyone who uses limited company status for business, in fact. Will their use of encrypted communications be accepted by the government, or will acceptance apply only to large enterprises? If the latter, why should large businesses be allowed to encrypt communications, but not anyone else? Are they somehow inherently more trustworthy than the average British citizen? And what about all those flexible workers who are working remotely from home and accessing core enterprise systems through secure private networks?
This whole area has so many pitfalls that, again, it may prove to be completely unworkable, because thousands of law-abiding citizens will be using encrypted communications at home in the normal course of their working lives.
Which brings us to trust.
• Why trust is essential
For our digital economy to succeed, citizens need to trust that digitised governmental services have their best interests at heart. Customers need to trust that their service providers can protect their private data. And private companies need to trust that the UK is a safe place to do business. In each of these areas, the amended Bill falls woefully short and puts that trust at risk.
The impression given by the proposals is that politicians and inexpert officials are rushing surveillance plans through, rewriting and amending them on the fly, with little consideration of the real-world consequences that may follow – and they are doing so against the advice of most technology providers, let alone civil liberties campaigners.
There is also a suspicion that the security services’ covert surveillance of UK citizens was illegal, and that the programme is now going ‘above ground’ in the form of the amended Bill. If that is the case, then giving judges, not politicians, the final authority would go some way towards restoring trust.
• A primitive digital nation
The impression of expedient measures being rushed through without consideration of their impact on citizens is only reinforced by a new Deloitte report on the progress of digital government programmes worldwide.
The report, The Journey to Government’s Digital Transformation, explains that one of the signifiers of an “early stage” digital government – rather than of a “developing” or “digitally maturing” one – is a primary focus on cost-reduction, rather than on citizen benefit. Out of over 1,200 respondents in government organisations across 70 countries worldwide, the British government is revealed by the Deloitte survey as being by far the most focused on cost-reduction (and therefore, implicitly, the least focused on citizen benefit).
The report even quotes Mike Bracken on the need for organisations to develop a supportive culture beneath technology change. At the time he was interviewed by Deloitte, Bracken was presumably still head of the UK’s Government Digital Service (GDS). However, he resigned in August and has since joined the Co-Operative Group, citing an institutional lack of the culture necessary in government to effect digital transformation and bring about real citizen benefit. Recently, he has set about employing several of his former colleagues (who, we can surmise, may be equally disillusioned with the UK’s digital progress).
In this context, the amended Investigatory Powers Bill should cause real alarm. An ill-informed, cost-focused, blunt instrument to fight crime – one that’s politically expedient, technologically unworkable, ignores emerging technologies, and is reliant on a disparate group of private providers to secure citizens’ data (even though there’s no legal obligation on them to encrypt it)?
Does that sound like a solution?
• It’s just not right
But there’s another dimension to this proposal that we cannot ignore: this is simply a scheme that has no place in any society that values free association, freedom of speech, and freedom of thought. A culture in which security services police people’s reading habits online at politicians’ behest would be a catastrophic misstep.
In a free society, no one follows citizens around bookshops or libraries noting what books they browse, read, borrow, or buy, or what opinions they read or listen to, because we know that people are not defined by their browsing or by their reading choices. Indeed, someone once said that a good library should contain something that offends everyone.
Reading something online is no different; just because a text is digitised does not somehow make it dangerous, any more than a piece of paper is inherently dangerous. We read to learn, to find out about the world, to enquire, to ask questions, to gain different perspectives. And the roots of the World Wide Web lie in the dynamic linkage of information resources to aid the spread of knowledge and education, to the benefit of us all. Click on a Google News link and you might be taken to text from anywhere in the world.
The problem with blanket surveillance is that there is an implicit presumption of guilt: that Bad People are doing Bad Things, and that this simple truth will be revealed by their browsing habits. But it’s nonsense.
Suppose Person X wanders into a library or bookshop, and sees a book about the history of Islamic fundamentalism, or an analysis of the rise of fascism in 1930s Europe, and decides to read a chapter. Or let’s say he leafs through a book that’s critical of the Iraq War, or of global capitalism, or of Israel’s policy towards Palestine, or of the current Conservative government.
That browsing tells us nothing about Person X, because it is without context; he is simply browsing. He might be a student; an historian; an academic; a journalist; an economist; a theologian; an atheist; a cultural theorist; an artist; a writer; an analyst; a researcher; a politician; a businessman; a school teacher; or a school pupil. Or he might simply want to read about the world, to find out more about the things he reads in the papers, to listen to other voices and other opinions. He might even vehemently disagree with everything the author says.
Now replace the word ‘book’ with ‘website’. What’s changed? Nothing.
The point is this: to browse is neither to agree, nor to condone. The moment we start building assumptions of guilt or ‘danger’ into someone’s browsing habits we are ascribing a perceived threat to specific subjects, and to the acquisition of knowledge about them. To investigate people based on their browsing habits is essentially to say, “Don’t read about X, Y, or Z”, which would be the death of reason itself.
Sooner or later a politician will utter the immortal words, “The innocent have nothing to fear”, but that will not be the case if these proposals become law – a law from which MPs will, extraordinarily, apparently be exempt.
If the UK adopts blanket surveillance of citizens’ browsing habits, then we are opening the door to a form of digital ‘McCarthyism’ on an unprecedented scale, and this can only damage the UK, damage our society, damage the economy, damage business, undermine trust, undermine digital programmes, and put the UK out of step with the rest of the developed world.
State surveillance is what those ‘other’ people did, those enemies of freedom, democracy, and Western values; those people who represented everything that we stood against during the Cold War. Remember them?
When did we become them?
17 September 2015: When SEO and the real world don’t match
It’s a strange fact of life that among the people who are most likely to spam you with marketing noise are search engine optimisation (SEO) specialists. Indiscriminate mass mail-outs telling you how your web presence should be optimised and finely targeted? Ironic.
An SEO expert sent me a personal message on LinkedIn the other day, explaining how his services could give one of my websites a much higher profile. The result was me disconnecting from him as soon as the message arrived, while making a mental note to avoid him for the rest of time (a productive moment for us both, I’m sure you’ll agree).
It wasn’t just that his blunt-instrument approach to salesmanship was a patronising misuse of a social platform – not a great advert for a digital ‘expert’ – it was also that the website in question (my side business hiring out a humanoid robot) is already number one on Google for some relevant search terms, and is page one for most others. None of my clients have had any trouble finding it [see BBC picture], so what exactly was this SEO salesman offering to do?
Yes, even my crossheads are meta
Of course, the truth is he hadn’t done any research before contacting me and was (I assume) just blitzing everyone on LinkedIn who looked like they might have a budget. But his self-defeating strategy did set me thinking. SEO has long fascinated me, because I’ve heard more noise about it than just about any other aspect of the digital world.
So I checked my robot’s website against a web statistics benchmarking and analysis tool, which told me in its machine-generated, coldly logical terms that it was very poorly optimised for SEO indeed, with a score of just 13 per cent. This, remember, was an automated report on a website that’s number one for some search terms, and page one for most others. Fancy that!, as Private Eye would say.
According to the analytics tool, my robot’s website doesn’t say “robot” enough – after all, it’s a word that’s only mentioned multiple times on every page, on a website about a robot, containing pictures of a robot. But this is precisely what happens when machine logic and dodgy algorithms are applied to data resources – websites – that are designed to be read by human beings.
More gratuitous self-promotion
I shared an article recently, When SEO attacks!, about how slavish adherence to some accepted SEO principles is now changing the very nature of information itself, to the extent that, on some websites, perfectly good, accurate text is being rewritten, clickbait keywords are being endlessly repeated, and facts are being altered to meet arcane SEO guidelines – not to mention some misguided editorial policies.
Put another way, information is being made more friendly to machines and demonstrably less reliable, useful, and interesting for humans, turning it into a kind of linguistic grey goo. In fact, it’s worse than that: human beings are starting to think more and more like machines, in their attempts to second-guess the unknown workings of a hidden, and constantly evolving, set of proprietary search engine algorithms. (You can read more about machine-like human behaviour here.)
This probably has a similar success rate to playing piñata in the dark, in a football field, using a stick made out of balsa wood. And rarely have so many poor decisions been made by so many people based on such little information, to such flock-based and self-defeating ends (read more noise that’s just like other noise!).
To expand on that point, I could name several publishing houses – not clients, I hasten to add – where SEO staff outnumber the journalists on key magazines, and whose SEO rules forbid the use of certain tags and force writers to reuse the same phrases as often as possible, even if it means changing the facts of a story in the quest for clicks. It’s a commonplace practice. And yet – intriguingly – their stories almost never rate highly on Google. Fancy that!
So what does it all mean? In a calm moment of scientific rigour I phoned a web developer friend, whose strictly standards-based sites all seem highly visible to search engines and are enjoyed by the human beings they’re designed for (the mix we all aspire to, let’s face it).
“SEO?” he said, sounding audibly irritated, “it’s all bollocks.”
Granted, my friend was in a bad mood, but there’s a serious underlying point: if you’re in the information-access business, as Google is – the business of making useful information more easily available to human beings so that you can help them, while also learning more about them and selling that data to advertisers – would you show preference to sites that have been ‘SEOed’ into meaningless clickbait to make them more attractive to your service? Or would you give greater weight to stories that are genuine, accurate, and stick to the point? Go on: have a guess.
Now, of course it’s possible to make websites more visible to search engines by being clear and precise, rather than woolly and obscure. So I’ll take a leaf out of my SEO salesman’s book by giving you some in-depth SEO consultancy of my own, even though you haven’t asked for it:
Build a consistently and thoroughly standards-based HTML5 site, one that’s accessible, mobile-ready, and mobile-friendly, and then use your platform to say what you mean, and mean what you say, as clearly, as accurately, and as passionately as possible, to an audience that you can define. And then describe and tag it meaningfully and helpfully so that human beings can find and read the information that they actually want. Then – if both sides agree – start a relationship with those people that’s based on respect, tact, and mutual knowledge, not spam. (Do credit the source if you cut and paste this paragraph. Thanks.)
There you go. Just leave the cash in used twenties in the usual place and we’ll say no more about it.
7 September 2015: Rise of the ‘vempire’. Why we need personal APIs
Last month I sat in on a workshop in which a major company imagined what the future will be like. Each of us was asked to tell the room what we will be doing in ten years’ time. I explained that in 2025 a search company has used all of the personal data that’s spread across the internet about me to patent the concept ‘Chris Middleton’, and, as a result, I am now a person of no fixed identity languishing in prison for breach of copyright.
It says much about our age that this didn’t seem too far-fetched.
Then there was another exercise: identifying future customers’ needs and ‘pain points’. Company employees were asked to put a tick against whichever items on a list seemed most important. Nearly everyone ticked ‘trusting our motives’ – good news, I’m sure you’ll agree – but only one person ticked ‘delivering against that trust’. Me. I ticked it to force them to question why no one else had.
Ironically, all of this happened on the same day that news broke of Spotify’s new ‘privacy’ policy. That the music-streaming provider has joined the ranks of companies that stop just short of demanding your front door keys and your car in return for the right to pay for their services should come as little surprise. “With your permission, we may collect information stored on your mobile device, such as contacts, photos, or media files …” it said. Staggering. (I love the word ‘collect’. It’s what theft becomes if you tell people you’re doing it.)
It must be obvious now that companies’ privacy policies are their mission statements, and the fact that most people ignore them and click ‘Agree’ is their own stupid fault. Most people don’t read the small print: they actively choose to be ignorant of Ts & Cs rather than to inform themselves of the facts. Worse, they hope that someone else will warn them of any dangers via social media: a form of flocking behaviour that cedes leadership and personal data security to strangers. Hardly the sign of a digitally empowered society.
Fortunately for the flock, Spotify’s policy ‘change’ – revelation is a better word – provoked an outcry and an apology from the CEO. But it hasn’t been abandoned, only tweaked and restated, just as Uber’s withdrawal of its UberPop app in Paris should really be seen as the company pulling into a parking space and leaving the engine running.
So why is this happening?
As Channel 4 News economics editor Paul Mason noted in a recent Guardian blog, the claimed rationale behind Spotify’s and any other wholesale data-grabbing exercise is to make ‘the user experience’ better. But, in fact, it is invariably used to target advertising and messaging at customers instead: a feedback loop of endless aggregate advantage to the provider and their partners, not to the customer.
Mason adds that we are witnessing the emergence of ‘cognitive capitalism’, a term coined in 2012 by economist Yann Moulier-Boutang in his book of the same name. Boutang proposed that, far from living in a flat, networked society in which we all own and control the means of production – a digital restatement of socialist principles – we are actually living in the opposite, a form of data-based capitalism in which owning data capital is the new land grab, the new gold rush. Spotify’s actions certainly map against the latter.
Each of us has the gold and many companies feel they can simply take it. It’s time to empower ourselves and take our data back.
I made a similar observation to Boutang’s about ten years ago, saying that in the future our data will be the de facto currency in a world in which actual money becomes less and less relevant. You could argue that this emerging future is the real reason behind proposals such as the Snooper’s Charter. The government wishes to create the Data Bank of England, in effect, and is using national security as a smokescreen for doing so: the only legal means of overriding human rights legislation. In the meantime, ‘human barcodes’ are not the stuff of fiction: the US has long been flirting with them.
All of the data assets that companies such as Spotify turn into money and noise are being crowdsourced from the general public, thanks to people tagging their friends’ images and sharing their contact details without first seeking their active consent.
In other words, everyone around you is turning you into a data asset that a third party can sell for money. Are any of your friends Spotify users? Then Spotify has your data, possibly even your photos and media files. It’s that simple. When did you agree to this? You didn’t, because nobody asked. That’s the network effect.
Now, an interesting observation about the digital world in its current form is that sharing anonymous, open data sets tends to create ‘signal’ – projects that help improve society, the environment, sustainable services, smart cities, and so forth – whereas Personally Identifiable Information (PII) invariably creates noise, in the form of advertising and other information that people don’t want or need. This leads to a truly fascinating conclusion: being anonymous creates utilitarian benefits for society as a whole.
Few of us invest our PII in improving society’s or humanity’s collective future, mainly because we lack a platform for doing so. But we’re happy to simply give it away in return for noise. What we need is a platform that empowers us only to invest our data in programmes that we agree with, and which blocks its use anywhere else – even if someone shares our data without our consent. We need a means of automating ‘I agree’ or ‘I don’t agree’ and wrapping our own terms and conditions around our personal data, like a Creative Commons licence that refers to the individual, not just a media file.
In short: you should be able to set your own Ts & Cs for any data that describes you, and embed those Ts & Cs in the data itself.
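No such standard exists yet, but the idea is simple enough to sketch. A hypothetical wrapper in Python – every class and field name here is invented for illustration – might bind a machine-readable policy to each piece of PII and refuse any use that falls outside it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PersonalTerms:
    """Hypothetical owner-defined Ts & Cs that travel with the data."""
    owner: str
    allowed_purposes: frozenset  # e.g. {"medical-research", "smart-city"}
    allow_resale: bool = False

@dataclass(frozen=True)
class WrappedPII:
    """A piece of personal data that carries its own licence."""
    value: str
    terms: PersonalTerms

    def use(self, requester: str, purpose: str) -> str:
        # Consent is evaluated automatically, like a Creative Commons
        # licence check, rather than via a one-off 'I agree' click.
        if purpose not in self.terms.allowed_purposes:
            raise PermissionError(
                f"{requester} may not use {self.terms.owner}'s data for {purpose!r}")
        return self.value

email = WrappedPII(
    value="chris@example.com",
    terms=PersonalTerms(owner="Chris",
                        allowed_purposes=frozenset({"medical-research"})))

print(email.use("university-lab", "medical-research"))  # permitted: returns the address
# email.use("ad-network", "targeted-advertising")       # raises PermissionError
```

The hard part, of course, is not the wrapper but enforcement: making the policy travel with the data and remain binding on whoever receives it, which is a legal and infrastructural problem as much as a technical one.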
But back to Boutang’s observations. The truth is we are living in neither a digital capitalist nor a digital socialist future just yet. We are poised between the two, but nearing the point where these two different viewpoints will collide in a global conflict. Let’s call it the First World Data War. Forget religion, this is the real Great War of our age. There will be bloodshed, both figuratively and literally.
Mason suggests that all of the companies that are stockpiling our data and turning it into money (for themselves) and noise (for us) are building their “castles on sand”, because their users will soon turn against them. I don’t think so: it’s gone on for too long and we have all been complicit in their actions. The data remains even if we abandon a platform or an app, and therefore so does the information asset that can be monetised by a third party. (The phrase ‘digital footprint’ is fast being replaced by ‘digital tattoo’ for a reason.)
Questions to be answered
So there are two key questions to ask. The first is: Why are we complicit?
That’s easy. Collectively, the human race has been a predatory group of pleasure-seeking apes for a lot longer than it has been a cultured and sophisticated society, and a cynic might observe that all the accumulated centuries of deep knowledge since The Enlightenment are fast being abandoned in pursuit of surface, clickbait, and cat videos, thanks to our quest for a quick, evanescent hit of pleasure.
In short, we are genetically hardwired to grab easy options and free stuff. But there’s a problem: all the free stuff that we used to get in return for giving away our data was (a) never free to begin with (we paid for it with our data and our time), and (b) is now being replaced by paid-for apps and premium services. Free stuff was only ever an enticement to give away our stuff.
Not only that, but many of the people who make the stuff – writers, photographers, musicians, etc – are no longer getting paid; instead all the money is going to tier after tier of middlemen, principally advertisers. Spotify and US rival Pandora are among the worst offenders: the amount they pay artists per stream is pitiful. We need to coin a new word for this type of business. I suggest ‘vempire’.
Our overall behaviour online suggests that, collectively, the network supports and encourages a digitally socialist viewpoint: sharing, collective ownership and control of the means of production, and so on. But counter-intuitively, our desire to have lots of free stuff has created powerful data landowners and landlords, to whom we are quite happy to cede power over everything that identifies us. Hence my joke about being imprisoned for breach of copyright over what constitutes ‘me’.
All of which brings us to question 2: what can we do about it?
Speaking at IPEXPO in London last year, the Web’s prime mover Sir Tim Berners-Lee said that members of the public must start to regard their own data as a personal asset, and take back control over it, putting themselves in a position to bargain with organisations and demand more in return for sharing it.
That’s all well and good, but short of simply kicking up a fuss, how can we do that after 20 years of ecommerce, 15 years of mass mobility, and 10 years of social sharing? Whatever we do, the cat remains out of the bag.
One possible means to take back power from the ‘data landlords’ and deploy the ‘quantified self’ to greater social and personal advantage is an emerging concept: the personal API, a term coined by Eric Friedman when he was based at Foursquare in New York. One of the founders of Foursquare, Naveen Selvadurai, has been experimenting with just that, as he explains in a blog post.
Ironic, isn’t it, as Foursquare is based on the principle of learning your likes and preferences.
Creating a personal API platform and standard could be a fascinating route ahead for consumers in the digital world. An equivalent of the Creative Commons licensing scheme, it might allow people to share as much or as little personal data as they wish and, better still, decide what uses that data might be put to – and what uses it may not.
Placing your own data behind a personal API might give you the power to force any company, organisation, or individual to engage with you on your terms, giving greater power back to the user to create ethical ‘investments’ and withdraw support from any programme that does not benefit society as a whole, or match your own belief systems.
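What a personal API gateway might look like can be sketched very simply. This is an assumption-laden illustration, not Selvadurai's implementation or anyone's product: a single function sits in front of your profile and decides, per requester and per field, what to share, so every organisation engages with you on your terms. All names and the policy structure are invented for the sketch.

```python
# Your data, held behind your own API rather than on someone else's platform.
PROFILE = {
    "name": "Alex",
    "email": "alex@example.org",
    "location_history": ["Brighton", "Lewes"],
}

# Your terms: which requesters may see which fields. An empty set means
# you'll engage with them, but share nothing.
POLICY = {
    "ethical-research.org": {"location_history"},
    "retailer.example.com": set(),
}

def personal_api(requester: str, fields: list) -> dict:
    """Return only the fields this requester is licensed to see."""
    granted = POLICY.get(requester, set())  # unknown requesters get nothing
    return {f: PROFILE[f] for f in fields if f in granted}

# The research project gets the one field you licensed; your email stays private.
print(personal_api("ethical-research.org", ["location_history", "email"]))
```

Withdrawing support from a programme you no longer agree with then becomes a one-line policy change, rather than a hopeless attempt to claw data back from a platform.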
It’s just a thought. I’m not saying it’s flawless or the greatest idea in the world. For example, might it encourage some people to be greedy and simply flog their data to the highest commercial bidder? Of course, but at least they could do that on their own terms, fully informed of the purpose of the data’s usage, and with the companies concerned giving them something other than noise in return.
Equally, might it be insecure, and might the government demand access to it? No doubt, but at least it changes the conversation and slams the door on the Spotifys of this world who do little more than tell you they’re picking your pockets.
That’s surely a conversation worth having.
An earlier version of this article first appeared on Diginomica.
1 September 2015: My robot becomes a TV star
On Sunday I had what you might call ‘a very Chris day’: accompanying a humanoid robot to a BBC TV studio, hiding behind a screen, and trying to suppress the robot’s natural urge to sing Michael Jackson songs when left unattended. It was a live show, so a sudden burst of ‘Thriller’ and frenzied robo-dancing would have gone down badly during the interview with Terry Waite.
The robot, of course, was Stanley Qubit, the grumpy-but-cute NAO-25 robot who came into my life two years ago on a plane from Mexico: a bit like Paddington Bear updated for the 21st Century, now I think of it, except it actually happened. Stanley has a hat just like Paddington’s (he wears it around the house), but he doesn’t eat marmalade. Instead, he has a fondness for red balls, which he is obsessed by. As are most robots, incidentally. How’s that for a fact of the day? (Not my red balls, you understand. I mean rubber ones.)
So if the robo-pocalypse does happen, just wave a red ball in the marauding Terminator’s face: it will be as helpless as a puppy. Robots also have trouble with rugs and carpets, by the way, so a sturdy shag pile and a box of balls will save you from Armageddon. Just remember: you read it here first.
This year, Stanley has appeared at conferences, taken two school assemblies, and even been approached by an artists’ agency who want to represent him. I should add that Stanley is the least grumpy member of a household that includes Doodle the deranged cockapoo, and another noted technology journalist (who is very grumpy nearly all of the time).
Anyway, back to the TV show. Stanley had been invited to appear on Sunday Morning Live on BBC1 – aka the ‘Andrew Marr is on holiday’ show – taking a sort of Bill Turnbull role alongside host Sian Williams. (It’s just been announced that Bill is leaving BBC Breakfast. Stanley is cuter than Bill, cheaper, and is a much better dancer. He can also read the news without an autocue. I’m just saying…)
It was a surreal experience: outside the studio window were a troupe of scantily clad, gyrating dancers with steel drums, while inside were lovely Ana Matronic out of the Scissor Sisters, equally lovely Bonnie Greer, a man who has written a book about rape, and lots of angry people discussing immigration. Somehow Sian Williams – effortless and impressive, I must say – knitted the whole thing together live on air, possibly suppressing a strong urge to scream.
Afterwards, Ms Matronic came over and said hello, and was kind enough to give me a copy of her new book, Robot Takeover (Octopus Books, published this week). With the Scissor Sisters currently on hiatus, she is recording a solo album and developing new careers as an author and broadcaster.
But really she just wanted to meet Stanley… and watch him take over the world at first hand. This happens a lot. I’m just staff now: a little glimpse into the future for us all. If he develops a music career, he may become the first pop star to throw himself out of a window. (OK, there was Robert Wyatt. But that was an accident.)
21 August 2015: Businesses! Stop killing the language
There’s a moment of tension that comes when you’re a journalist sitting in a conference hall – at a C-level IT event, for example. It’s like a full-body cringe, accompanied by a strong urge to scream, or hurl a free branded stress ball at someone’s forehead.
It happens when the CEO, the CMO, the CIO, or some other all C-ing eye with a chequebook the size of an airport starts tossing words aside as if they are mere playthings rather than a means to communicate ideas, meaning, and clarity. “We’re expanding across all geographies,” he says, leaning back like Jimmy Page, “and we’re getting real traction in the Asia geography.” (Applause.)
No, you’re not. ‘Geography’ is the study of the earth, and its landscapes and environments. You can’t expand into it, you can’t (kill me now) transition it or (ditto) leverage your paradigms at it. And it doesn’t have a plural. It means ‘writing about the earth’. It comes from the Greek… oh never mind. The point is, it’s not a synonym for ‘country’, ‘region’, or ‘continent’. And what’s wrong with those perfectly good words anyway? For anyone who loves words and values clarity, these moments are a FRIGGING NIGHTMARE.
And then there are all those acronyms, designed to declutter the language. Except they don’t: they obfuscate and confuse. Take MDM. Mobile Device Management? Why, yes. But also (off the top of my head): Mobile Data Management; Master Data Management; Machine Debug Manager; Multi-service Data Manager; Multiplexer/Demultiplexer; Modular Digital Multi-tracks; Micro-Doppler Modulation; Mobility Diagnostic Module; and, er, Mechanically Deboned Meat and Melodic Death Metal. So: you’re in the MDM market? Well good for you! I’ll have a pound of sausages and a gloomy riff, please.
Anyway. PLEASE, business leaders, stop undermining the English language, unless your intention is to present yourself as someone who knows the monetary value of everything, but the meaning of nothing. A little bit of clarity goes a long way. You don’t need to bend the language to your will, as well as entire continents. Speaking clearly is fine. Just use the correct words; no one will think you’re a simpleton. They might even listen to you.
Got that? Good. Carry on. Roll out those solutions.
5 August 2015: Selfies
I’ve the good fortune to have a beautiful view from my window. So far, so me-me-me. But it was fluke and happenstance – a chance conversation – that led me to this spot some years ago, to the epicentre of a town that attracts hundreds of thousands of visitors every year.
But why am I telling you this? Because living here has taught me to look. At the view, of course, but also at the people who are looking at the view. And something has begun to worry me.
For years I watched people wander past my window, stop, take a deep breath, gaze at the view, the sky, the people, the colours, and the landscape, and then – almost as an afterthought – reach for their cameras or phones to grab a few pictures. And that was just as it should be.
But two years ago all of that changed. Now people rush to the front, find the ideal vantage point, and take a selfie: a close-up picture of themselves.
Faced with a beautiful landscape, big skies, soaring gulls, a thousand buildings, 250,000 people with stories to tell, crashing waves, the wild and ever-changing sea, amusements, skaters, jet-skiers, surfers, swimmers, and 100 dramas at every compass point, the most fascinating thing that people can find to photograph is their own pouting faces.
Granted, I share this on a website about me, but the point is I see thousands of people every day – young and old, gay and straight, singletons, couples, families, groups – go through the same ritual: they find the ideal spot, ignore the view, the people, and everything around them, spend minutes composing themselves into selfie shapes, press the shutter, stoop, post the pictures, and then move on.
Nobody stops to take things in first hand; they only pause to photograph themselves. And if the only thing giving you perspective is a selfie stick, then something is wrong.
If they even notice the land, the sea, the sky, the buildings, the seasons, the street performers, the colours, the skaters, and the rough sleepers, then it can only be as background detail in a million self-portraits.
The most troubling example of this was a few weeks ago. Living as I do in a popular tourist spot, wedding parties often congregate just yards from my window. In January, a young bride and groom braved the weather and trooped down to the sea to have their ‘friends and family’ pictures taken just outside my door.
But as they posed for a group of 40 or more guests – the bride beautiful in a chic and elegant dress, the groom handsome and suited – all of their friends turned their cameras on themselves and took selfies, not pictures of the happy couple.
I watched for 20 minutes or more until they began drifting off towards the reception. During all that time, no one took a photo of the bride and groom as they posed on their special day, enjoying their first moments as husband and wife. Instead, the guests all held their phones aloft and took self-portraits.
So take it from someone who sees many thousands of people daily, all taking pictures – summer or winter, rain or shine, dawn or dusk: we’ve reached a point in human history at which nothing is more important than taking self-portraits.
The wedding took place just days after the Charlie Hebdo atrocities in Paris – where, within 24 hours, people were taking selfies outside the murdered cartoonists’ offices, and outside the shop where the siege took place. Not documenting what they saw, not paying their respects, but seeing bloody history as mere wallpaper for an endless stream of self-portraits.
Today, the Photoshopped fake of a man standing on the roof of the World Trade Center as a plane swoops in behind him seems prescient, rather than the grim joke that it was in 2001. One day soon, someone will take a selfie rather than run for their life.
What’s happened to us? What changed and made us see the world in dysmorphic widescreen? What made our self-obsession the only subject worth documenting, our autobiographies the only stories worth telling?
Turn your cameras around and document the world. But first do something just as important: experience the world first hand for yourselves – don’t watch the video later to find out what you missed.
And why stop there? Go the whole hog. Switch your phone off. Watch gigs, don’t record them. Walk somewhere. Feel the pebbles underfoot and the waves crashing on the shore. Get on a train and visit a strange town. Talk to people you don’t know. Help someone. Learn new skills and teach them to others. Devote your time to the community. Go outside your comfort zone. Say no. Say yes. Buy analog. Write letters. Read books. Write books. Support local businesses and artists.
Because if the most fascinating person in your own life is you, then you’re living it wrong. And by way of rank hypocrisy, welcome to my website, all about me-me-me, and to my blog.
29 July 2015: Four myths about publishing
Publishing is chock-full of myths: things that most people believe, but which simply aren’t true. Here are my top four:
Print is dead. Everything’s digital.
Not true. Like vinyl, print is booming. As technologies always do when they’re substituted, it’s moved upmarket into high-value niches, where it’s loved by aficionados of all ages who appreciate quality, craft, exclusivity, and depth. In a common(s) market, rarity’s value increases. Digital is great, but not all information should be valued by the speed at which it moves.
Little White Lies, Monocle, Frankie, Hero, Frame, Sleek, Disegno, Dansk, Surface, Another Escape, Optology, Hungry Eye, Huck, Road Book, London Boat, Fantastic Man, Hunger, Exit, Art Review, System, Milk, Flaunt, Prestage, Lula, Printed Pages, Dapper Dan, The Gentlewoman, our own Strategist, and hundreds more, are just some of the recent titles that represent a new golden age for the printed page.
Magazines have to say whatever advertisers want them to.
Not true. What advertisers really want is to be associated with quality, thought leadership and independence, because they probably share those values. They’ll chance their arm, but no one respects magazines that print anything for money – least of all their ‘readers’, who throw them in the bin. Stand for something; fall for nothing.
Business magazines have to print the spin.
Why? Be honest with your clients and customers. You’ll be surprised how much they like it and your advertisers and partners will too. Professionals and aficionados see all sorts of magazines, but they read the ones that have voices, opinions, and personalities, and which write about the real world. (Think about what you would choose to read. Why are your clients, customers, and communities any different?)
Respecting your audience means assuming they’re intelligent. Instead of trying to shift product at every opportunity, be a thought leader about what your customers and clients really care about. They will come to you. Confidence is a preference for the habitual voyeur of… shelf life.
You have to chase clicks and popular stories.
Not true. Clickbait and cats are what Facebook is for. Your visitor numbers may spike when you jump on a meme, but keep doing it and in the long run your site will become a noisy, unfocused portal that no longer has a purpose, a core readership, or any values. Just look at the Huffington Post. That’s not much of a story to tell to your customers and partners. Leadership is about clarity and focus. Expressing opinions about everything in the news just tells your audience that you’re desperate for attention. ‘Meme’ is ‘me 2’. In a noisy room, whisper “Follow me” in one person’s ear.
18 July 2015: Record Store Day
On the face of it, Record Store Day 2015 (RSD15, 18 April) was another success for the annual event that celebrates indie retailers and the resurgence of vinyl. Music fans queued round the block at local stores to buy limited edition releases produced especially for the day.
It began in the US in 2007, when 700 independent stores came together to celebrate a unique culture that is driven by music fans: the buyers and sellers who’ve kept a passion for music alive in the face of an often passion-free mass market. The UK followed suit, and in just eight years RSD has become a powerful movement.
But is the spirit of Record Store Day still alive? Well, there’s still much to celebrate. The sight of people from every age group queuing around the block to buy music from local shops should gladden the heart of everyone who supports music, indie retailers, and physical formats.
The vinyl market has been growing at 70 per cent year on year for several years running, and April 2015 saw the debut of the first official vinyl-only chart. Other analog media are enjoying a similar renaissance, offering a true alternative to digital streams and downloads. (Check out this Pinterest board of new print magazines, for example.)
And it’s not just middle-aged men, die-hard analog fans, or hi-fi buffs who are buying LPs; teenagers and students are too. High street clothing store Urban Outfitters now stocks Crosley turntables and classic vinyl reissues.
Fans certainly see Record Store Day as an opportunity to celebrate their passion for music with live music, parties, and more. For example, the Union Music Store in Lewes, Sussex, hosted an afternoon of free live acoustic performances, a DJ, and drinks. The Union has started its own record label, too: a good example of how creative indies can still carve out a niche for themselves by doing the things that the likes of HMV either can’t, or have forgotten how to.
Another Sussex store, Resident in Brighton, has become the go-to place for tickets for both local bands and touring acts. Being a focal point for a community is how to make bricks-and-mortar retail work in 2015. The shop’s love of music is evident as soon as you walk in – and they count a well-known former music writer among their staff.
Popular rival Cult Hero in Brighton, which sells art house DVDs alongside an expanding list of new vinyl (and a dwindling list of CDs), had a ‘one out, one in’ door policy on the day to cope with the morning queues.
So: much to cheer. But there was also a note of discord…
By 9.05am on the day – just five minutes after the doors opened nationwide – the first RSD15 releases were already appearing on eBay, in a couple of cases with 1,000 per cent markups. Those sellers must have photographed the discs in store and posted the listings instantly by phone – on 24-hour auctions to capitalise on the spike in fans’ interest.
In fairness, not everyone has a local music shop to support and so eBay may be their only option – although most indie music stores have excellent websites. But it’s clear that many of the early risers on the day weren’t music fans at all, but people who’d queued from sunrise purely so that they could snap up a collector’s item and flog it for profit literally seconds later. The after-market for previous RSD releases is strong: some rarities already fetch hundreds of pounds at auction.
Of course, none of this should matter to the retailers that RSD is designed to help, as long as their tills are ringing. But while RSD is a great day for the shops and represents a massive uptick in income, not all retailers seem as happy as they once were, despite the surge in sales and the new long-term relationships that some of those sales may create.
So why are some now grumbling? In some cases, bands are keen to make rare recordings available to hardcore fans, but in others, major labels are cashing in on an event that is fast becoming a mini-Christmas for the indie market – for example, with marked-up picture discs that have been available many times before. “The list is nowhere near as good as last year,” one shopkeeper told me.
And it’s not just the labels who are maximising their rewards from a day that’s supposed to be about indie shops and music fans; it’s also the distributors. A local seller (who remains anonymous) told me that some labels and disties have begun charging shops “crazy prices” for RSD discs, ramping up their own profits and screwing indie shops out of the very margins that they need to survive. And this on a day that’s supposed to be about independent retail, and not about the big fish or cynical opportunism.
But long live Record Store Day. And long live independent music, local shops, communities who care, and content that matters on formats that people treasure and keep. Not everything should be valued by the speed at which it moves. After all, culture is the stuff that sticks around.
‘Enthusiasts’ should never be a dirty word.
All text © Chris Middleton 2015