Why increased surveillance will put one million children at risk
Chris Middleton explains why the government’s surveillance plans may criminalise thousands of innocent people – including children.
OPINION The British government’s surveillance Bill forces telecommunications providers and ISPs to retain all customers’ phone and internet data for up to a year, while weakening encryption protocols and setting technology providers against their own customers’ interests.
So, beyond the questions of whether such a scheme is acceptable, ethical, or viable – which are explored below – a key question must be: are ISPs and telcos ready for it, given that they are front and centre of the government’s plans? The 2015 hack of TalkTalk, in which unencrypted data from 157,000 customer accounts was stolen, suggests that the answer may be no.
The hack of TalkTalk’s website demonstrated two things. First, that customer records are irresistible to criminals in a world in which private data is the de facto currency. And second, that there is no legal obligation on internet providers or telcos to encrypt them.
The company’s CEO, Dido Harding, went so far as to cite the Data Protection Act in her defence against criticisms that account data had been stored ‘unsalted’. The Act fails to make data encryption a legal requirement, allowing providers to hide behind its vaguely worded terms.
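For readers unfamiliar with the jargon: ‘salting’ means adding random data to each record before hashing it, so that stolen hashes cannot be cracked in bulk with precomputed tables. A minimal sketch in Python shows the basic precaution at issue – the function names and parameter choices here are illustrative assumptions, not a description of TalkTalk’s systems:

```python
import hashlib
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; a fresh random salt per record means
    identical passwords produce different stored hashes."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Storing data ‘unsalted’ means skipping that first random step entirely – which is precisely what makes a stolen database so attractive to criminals.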
This all-too-public failure of a supplier to secure its own customer data should serve as a warning to Whitehall: it will be open season on the internet records of UK businesses and private citizens, because their allure will be irresistible to criminals. ISPs, telcos, mobile providers, and others, will be expected to secure that data from day one, but as dozens of separate companies, their response will be piecemeal rather than a co-ordinated national strategy.
By leaving it up to the market to decide how best to handle a national intelligence programme, the government’s surveillance policy has the makings of being a dangerous ideological gamble with citizens’ data security.
The revised Investigatory Powers Bill may also force some technology companies to break the laws of the countries in which they are incorporated, as Apple CEO Tim Cook explained to the then Home Secretary, Theresa May, in 2015. O2, Vodafone, EE, and 3 were among others to express their disquiet.
So the government ignores an important fact by pushing through its plans: it isn’t just declaring war on terror; it’s also declaring war on IT suppliers’ business models. That sits badly next to the government’s demands for supplier cooperation.
Inevitably, the Bill will drive more citizens towards end-to-end encryption, and towards privacy-shielding apps or devices, while reducing citizens’ trust in central government. Indeed, some IT suppliers have already begun urging their customers to adopt privacy tools, as this story reveals.
Increased risk to children
Away from the TalkTalk boardroom, Dido Harding is Baroness Harding of Winscombe, a member of the House of Lords who has stated her personal determination to make the Web safer for all, especially for children – if not for her own customers, it seems.
So, will the amended Bill protect children from those who would exploit them? No. Any programme of national surveillance will significantly increase the risk of harm to the UK’s children, for two simple reasons.
First, children’s rights to privacy and free association are protected under Articles 13 to 17 of the UN Convention on the Rights of the Child. This means that the British government may have no choice but to seek a national security exemption against the Convention – the most widely upheld piece of human rights legislation in history – because any national data-mining exercise would gather data about children’s internet activities as well as those of adults.
And second, because by far the largest distributors of explicit images of minors are minors themselves. Several reports have suggested that up to 25 per cent of all teenagers engage in ‘sexting’ – surely an underestimate – with 80 per cent of those involved being under the age of 18.
So let’s do the maths. There are five million teenagers in the UK, according to government statistics. If up to 25 per cent of them are sexting, as multiple surveys suggest, that’s 1.25 million young people. If 80 per cent of those involved are under 18, then one million children could be at risk of prosecution under the UK’s new powers.
Scaremongering? No. In 2018, the first mass-prosecution of teenagers for distributing explicit material took place in Denmark, with over 1,000 youths charged for sharing the same video on social media.
Smaller-scale prosecutions have taken place in both the UK and the US, where teenagers – including some aged over 16 in legal, consenting relationships – have been convicted of sending pictures of themselves to their partners. Under a strict interpretation of UK law, any minor who sexts is no different to the abusers the Bill is designed to catch.
In the US, a Carolina teenager was recently found to be both the perpetrator and the victim of this crime, for having an explicit image of himself on his own mobile. There’s no evidence that he sent the image to anyone, but he’s now a convicted criminal.
What we, as adults, think about the morality of teenagers’ actions is irrelevant, as even the UK’s National Crime Agency (NCA) admits. In a recent interview, Zoe Hilton, head of safeguarding at the NCA’s CEOP Command, said, “With smartphones and tablets, and new apps emerging all the time, this behaviour is becoming quite normal for teenagers.”
The technical challenges
Then there are the technology impacts of a national surveillance programme. One is that an internet built to facilitate mass surveillance is a less secure internet, and a less secure internet makes cybercrime easier to commit.
Another is the challenge of working out who is and isn’t a cybercrime suspect. Let’s suppose that an IP address is associated with a criminal act. ISPs routinely reassign IP addresses, so an address doesn’t reliably identify a person, or even a device. IP addresses can also be bought and sold – as was evidenced by the government recently selling off unused IP addresses to, among others, Saudi Arabia.
They don’t even pinpoint a user’s geographical location. IP addresses can be cloaked, and can be rented off the shelf via proxy servers that allow a device to appear to be anywhere in the world from California to Korea. Meanwhile, computers can be absorbed into malicious botnets without the owner having any idea it is happening.
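The point that a logged address does not map to one device can be demonstrated with Python’s standard ipaddress module: whole households and offices sit behind a single public address via NAT, using private address ranges that never appear on the public internet (the specific addresses below are arbitrary examples):

```python
import ipaddress

# RFC 1918 private addresses are shared behind NAT: any number of devices
# in a home or office present the same single public IP to the internet.
for ip in ["192.168.1.10", "10.0.0.7", "8.8.8.8"]:
    addr = ipaddress.ip_address(ip)
    print(ip, "private" if addr.is_private else "public")

# 192.168.1.10 private
# 10.0.0.7 private
# 8.8.8.8 public
```

So even a correctly logged public IP address may stand for dozens of people – or, behind a carrier-grade NAT, thousands.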
The dangers of investigating people based on IP addresses are amply demonstrated by this December 2017 news report in The Guardian.
Just as significantly, there will soon be millions of new IP addresses, thanks to the Internet of Things (IoT): all those smart devices that will be coming online over the next few years under the extended addressing of IPv6. The result will be a tsunami of data that the police and security services almost certainly lack the resources to process.
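The scale of that addressing change is easy to illustrate with Python’s standard ipaddress module, which can count the two address spaces directly:

```python
import ipaddress

# The entire IPv4 and IPv6 address spaces, expressed as networks.
ipv4 = ipaddress.ip_network("0.0.0.0/0")
ipv6 = ipaddress.ip_network("::/0")

print(ipv4.num_addresses)  # 4294967296, i.e. 2**32
print(ipv6.num_addresses == 2 ** 128)  # True: roughly 3.4 x 10**38 addresses
```

IPv6 offers 2^96 times more addresses than IPv4 – enough for every lightbulb, doorbell, and thermostat to have its own, and every one a potential line in a surveillance database.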
The technology challenges inherent in any meaningful monitoring and data-mining of internet records are so vast and complex as to make the whole scheme nonsensical. Indeed, several providers have said that what the government wants may be technically impossible, not to mention vastly more expensive than Whitehall’s estimates.
On the face of it, national surveillance has the potential to be another Whitehall IT disaster, in the wake of the NHS National Programme for IT, Universal Credit, and other botched, overdue, and over-budget schemes.
The business angle
Beyond the lack of preparedness of some ISPs and telcos, few of the above problems will have a direct impact on business, perhaps. But three things certainly will: the rise of Bring Your Own Device (BYOD) schemes and ‘shadow IT’ in business; the growing grey area between ‘business’ and ‘private’ use of technology; and any lack of trust in the UK’s digital economy and its technology providers.
The first two can be taken together. Increasingly, employees use their own devices for business, and informally use their own cloud applications and storage facilities at work, too. This shifts their private internet usage into the corporate realm. Many of those employees also work from home or wherever there is free wifi. This has the reverse effect: it shifts corporate internet use into public and private spaces.
The enterprise use of encrypted communications, proxy servers, and other tools is central to the successful running of the digital economy. These technologies are core to the UK’s economic health.
But many private individuals are also ‘companies’: self-employed people, freelancers, and so on – anyone who uses limited company status for business. Will their use of encrypted communications be accepted by the government, or does it only apply to large enterprises? If so, why should big business be allowed to encrypt communications, but not anyone else? Are corporations inherently more trustworthy than British citizens?
And what about all those flexible and remote workers who need secure access to enterprise systems at home? Organisations have a legal obligation to secure their own data, or face severe fines under GDPR. This complex set of problems is set out in this separate report. This whole area has so many pitfalls that, again, the Bill may prove to be completely unworkable.
Which brings us to trust. For our digital economy to succeed, citizens need to trust that digitised government services have their best interests at heart. Customers need to trust that their service providers can protect their data. And private companies need to trust that the UK is a safe place to do business. In each of these areas, the Bill is a failure.
The impression is that politicians and inexpert officials have rushed surveillance plans through, rewriting and amending them on the fly, with little consideration of the real-world consequences that may follow. And they are doing so against the advice of most technology providers, let alone civil liberties campaigners.
There’s also the suspicion that because security services’ covert surveillance of UK citizens was probably illegal, the programme has been forced above ground in the form of this Bill, using the blanket opt-out of national security, which exempts the Bill from human rights legislation. If that is the case, then giving judges the final authority, not politicians, would go some way towards restoring trust.
It’s just not right
But there’s another dimension to the legislation that a mature democracy cannot ignore: this is simply a scheme that has no place in a society that values free association, freedom of speech, and freedom of thought. In a free society, no one follows citizens around bookshops or libraries noting what books they browse, borrow, or buy, because we know that people are not defined by their browsing or reading choices. Indeed, someone once said that a good library should contain something that offends everyone.
Reading something online is no different; just because a text is digitised does not make it dangerous, any more than a piece of paper is inherently dangerous. We read to learn, to find out about the world, to enquire, to ask questions, to gain different perspectives. And the roots of the World Wide Web lie in the dynamic linkage of information resources. Click on a Google News link and you might be taken to texts from anywhere in the world.
The problem with blanket surveillance is that it carries an implicit presumption of guilt: the idea that the simple truth of bad people’s existence will be revealed by their context-free browsing habits. But that idea is nonsense.
Suppose Person X wanders into a library or bookshop, and sees a book about the history of Islamic fundamentalism, or an analysis of the rise of fascism in 1930s Europe, and decides to read a chapter. Or let’s say he leafs through a book that’s critical of the Iraq War, or of global capitalism, or of Israel’s policy towards Palestine.
That browsing tells us nothing about Person X, because it is without context; we have no knowledge of his intent. He might be a student, an historian, a journalist, an economist, a theologian, an atheist, a cultural theorist, an artist, a writer, an analyst, a researcher, a politician, a school teacher, or even a pupil. Or he might simply want to read more about the things he sees in the papers, to listen to other voices and other opinions. He might vehemently disagree with everything the author says.
Now replace the word ‘book’ with ‘website’. What’s changed? Nothing.
The point is this: to browse is neither to agree, nor to condone. The moment we start building assumptions of guilt or danger into someone’s reading habits, we ascribe a level of threat to specific subjects and to the acquisition of knowledge about them. In the long run, to investigate people based on their browsing habits is to say, “Don’t read about X, Y, or Z”, which would be the death of reason itself. And these investigations may be automated and infused with AI – systems that may simply reinforce the programmers’ own biases.
But ultimately, it boils down to this: state surveillance used to be what those ‘other’ people did, those people who represented everything that we stood against during the Cold War. Remember them? When did we become them?
© Chris Middleton 2015 and 2016