Dutch data privacy campaigner Marleen Stikker had a revelation of the ‘Big Brother’ potential of digital technologies in 1994, just ten years after the iconic date of Orwell’s 1984. As a founder of Digital City – Europe’s first virtual community – Stikker was offered a demonstration of the dark side of what were then emerging technologies.
Few people walking around a modern city centre minding their own business will be aware that sophisticated surveillance technologies are monitoring their every move. These same individuals will have become belatedly aware of the abuses of social media companies, following the Facebook and Cambridge Analytica scandal. But what they won't know is that it isn't even necessary to be online for your data to be tracked and recorded by invisible agencies.
“In today’s ‘smart cities’, there is continuous geo-surveillance of citizens. The data is then used to target people with various forms of governance, or advertising,” says Professor Rob Kitchin, who wrote the ‘Getting Smarter about Smart Cities’ report for the Irish Government. “The idea of privacy is to selectively reveal yourself, but in a smart city you’re always on show. Even if people think they have nothing to hide, they don’t want to be constantly monitored. There’s a good reason kids want two Facebook accounts if their parents are on social media.”
Professor Kitchin has a dual perspective on smart city technologies, which makes him an insightful analyst. He leads the team responsible for the 'Dublin dashboard', a suite of smart city controls in Ireland's capital city, but he also critiques the dangers of the data gathering in his role as an academic. He's careful to apply his ethical principles to his work on the Dublin dashboard. "We create tools that allow Dublin city to manage their data, but we make sure it's aggregated from thousands of people so no individuals can be identified," he says.
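The aggregation safeguard Kitchin describes can be sketched in a few lines. This is a hypothetical illustration, not the Dublin dashboard's actual pipeline: individual records are collapsed into per-zone counts, and any zone with too few people is suppressed so no individual can be singled out. The threshold value and field names are assumptions.

```python
from collections import Counter

K_THRESHOLD = 10  # assumed minimum group size before a count is released


def aggregate_counts(records, key="zone", k=K_THRESHOLD):
    """Collapse individual records into per-zone counts,
    suppressing any group smaller than k so it cannot
    identify an individual."""
    counts = Counter(r[key] for r in records)
    return {zone: n for zone, n in counts.items() if n >= k}


records = [{"zone": "D1"}] * 25 + [{"zone": "D2"}] * 3
print(aggregate_counts(records))  # → {'D1': 25}; D2 is suppressed (only 3 records)
```

Suppressing small groups is the simplest form of this protection; real deployments typically layer on stricter guarantees such as k-anonymity or differential privacy.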
The dangers of privacy abuses have increased exponentially in recent years and Professor Kitchin says we are now living in an age of “pervasive and ubiquitous computing”. Both New York and Chicago have more than 24,000 CCTV cameras and several British police forces have rolled out CCTV facial recognition programmes. In addition, the UK has 8,300 ANPR (Automatic Number Plate Recognition) cameras capturing 30 million number plates each day. Smartphones continuously connect to cell masts and Wi-Fi spots, or send out GPS coordinates. The GPS fitted in new vehicles allows on-board computers to track location, movement and speed. All this personal data is gathered by private companies and government agencies and can easily be shared with third parties.
‘Accidental’ smart cities
To be fair to city planners, one of the reasons they have neglected privacy concerns is that smart city technologies have sneaked up on them. "They've become smart cities almost by 'accident'. Cities have just accreted technology over time, including governance systems, control rooms, sensors and performance management tools. As a result, ethics haven't been top of the agenda and smart cities have become overly paternalistic. They've been acting on behalf of citizens without actually asking them what they want. There's very little citizen participation," he says.
In a number of major cities, sensor networks are deployed across street infrastructure to track phone identifiers, such as MAC addresses. In London, for example, the bin company Renew installed sensors on 200 bins. In one week in 2014 they tracked four million devices from bin to bin, mining data about which shops people visited, how often and how long they stayed. They used the information to show targeted adverts on LCD screens installed on the bins. “Passers-by could not opt out of the tracking unless they took a different route, so there was no consent to the use of the data,” says Professor Kitchin.
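The linkage the Renew bins performed can be illustrated with a short sketch. This is a hypothetical reconstruction, not Renew's actual software: sensor names, the log format and the MAC addresses are all invented, but the principle is the same, so long as a phone broadcasts a stable identifier, sightings at different sensors can be stitched into a movement trail.

```python
from collections import defaultdict


def build_trails(sightings):
    """Group (mac, timestamp, sensor) tuples into a per-device trail,
    ordered by time -- the linkage step that turns isolated sensor
    pings into a record of where each phone went."""
    trails = defaultdict(list)
    for mac, ts, sensor in sorted(sightings, key=lambda s: s[1]):
        trails[mac].append(sensor)
    return dict(trails)


log = [
    ("aa:bb:cc:11:22:33", 100, "bin-07"),
    ("aa:bb:cc:11:22:33", 160, "bin-12"),
    ("dd:ee:ff:44:55:66", 120, "bin-07"),
    ("aa:bb:cc:11:22:33", 300, "bin-19"),
]
print(build_trails(log))
# → {'aa:bb:cc:11:22:33': ['bin-07', 'bin-12', 'bin-19'],
#    'dd:ee:ff:44:55:66': ['bin-07']}
```

This is also why modern phone operating systems randomise the MAC address they broadcast while scanning: without a stable identifier, the trails fall apart.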
Similar technology tracks shoppers in malls, he says. The data is pooled with CCTV footage that captures demographic information. Some major cities have installed a 'mesh' of free public Wi-Fi, which tracks the IDs of all devices accessing the network. "Private companies and agencies possess a vast quantity of highly detailed behavioural data from which a lot of other insights can be inferred. It's become an acute problem in recent years as the data has become much more granular," says Professor Kitchin.
China’s ‘sincerity score’
An extreme example of manipulation is the Chinese Government’s ‘social credit’ system. The policy allows them to monitor every citizen’s consumer behaviour, as well as conduct on social networks and real-world interactions, such as receiving speeding tickets, or even arguing with neighbours. An algorithm integrates the data to produce a ‘sincerity’ score for every Chinese citizen. The system passes judgment without any recourse to appeal. Under such an Orwellian scheme, political dissent could be disastrous.
In India, there has been controversy about how Prime Minister Narendra Modi's Smart Cities Mission has uprooted tens of thousands of people. Modi plans to modernise 100 Indian cities by 2020 with smart city technologies, but critics say his US$7.5 billion plan has triggered mass evictions from slums and informal settlements in cities such as Indore, Bhubaneswar, Delhi and Kochi. "Indian smart cities are not aimed at sorting out the slums and poverty. Their aim is to provide housing and services to the emerging middle-class. You can make a case that smart cities are for the greater social good of some people, but certainly not everybody," says Professor Kitchin.
In the US, the smart city concept of 'predictive policing' has sharply divided public opinion in Chicago and other major cities. The idea is for officers to focus on an area that's more likely to see crime, according to police algorithms. The PredPol software, from one Californian company, turns its predictions into 500-feet-by-500-feet red squares on a Google map. In one case in Chicago, a 17-year-old was identified as a suspect via a Facebook picture from three years earlier, which led to his wrongful arrest and imprisonment for a crime he hadn't committed.
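The grid step behind such systems can be sketched simply. This is a hypothetical illustration, not PredPol's proprietary model: historical incident coordinates are bucketed into 500-feet-square cells and the busiest cells are flagged for patrol. Here the "score" is a plain count; vendors layer far more elaborate (and opaque) statistics on top, which is precisely the 'black box' critics object to.

```python
from collections import Counter

CELL_FT = 500  # side length of one grid square, as in the article


def hotspot_cells(incidents, top_n=2):
    """Map (x_ft, y_ft) incident coordinates onto a grid of
    CELL_FT x CELL_FT cells and return the busiest cells --
    the red squares an officer would see on the map."""
    cells = Counter((x // CELL_FT, y // CELL_FT) for x, y in incidents)
    return [cell for cell, _ in cells.most_common(top_n)]


incidents = [(120, 80), (430, 60), (510, 90), (140, 450), (90, 30)]
print(hotspot_cells(incidents))  # → [(0, 0), (1, 0)]
```

Because the input is past police records, any bias in where crime was historically recorded is fed straight back into where officers are sent next, a feedback loop central to the racial-profiling critique below.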
“It undermines the fourth amendment right that you should only target someone if you have warranted suspicion rather than algorithmic thinking,” says Professor Kitchin. “Another critique of predictive policing is that it’s based on social sorting, which often in the US ends up as racial profiling. This can strain relations between minorities and the police. Because the analysis is all done in a ‘black box’, there’s no transparency about how conclusions are reached, which is another cause of resentment.”
Military surveillance city
The city of Camden, New Jersey, just across the river from Philadelphia, has taken smart city policing to unprecedented levels. In Rolling Stone magazine, the reporter Matt Taibbi described how 121 cameras covered virtually every inch of sidewalk and police possessed a giant 30-foot mobile crane called SkyPatrol they could park in a neighbourhood and use to throw a surveillance net over six square blocks. The article described how 35 microphones across the city could instantly locate a gunshot to within a few metres. The police also placed scanners that read licence plates on the back of patrol cars. "Camden is like a military surveillance city, which is one example of how smart city technologies are being used in a technocratic and top-down way," says Professor Kitchin. "It's underpinned by instrumental rationality – the idea that technology can solve all the problems of the city," he says.
Using massive amounts of data to make decisions about every aspect of city life carries major security risks, too, he says. One of the biggest vulnerabilities is around transport systems and hackers have carried out high-profile attacks. In Poland, in 2008, for example, a 14-year-old boy modified a TV remote control so he could change the track points on the tramway system in the city of Lodz. He derailed a tram, injuring a dozen people. In San Francisco, in 2016, hackers disabled 2,000 machines in the computer system for the city's entire transport network, allowing customers to travel for free. In Haifa, in Israel, a cyber-attack shut down a major artery into the city at rush hour. Investigators suggested it could have been the work of an enemy state, such as Iran.
In the European Union, the General Data Protection Regulation (GDPR) will have an effect on smart city development when it comes into force on May 25, 2018. But it's not yet clear how it will play out. With invisible and ubiquitous smart city technologies, it's difficult to obtain free and informed consent. Controllers of the technology will have to make sure the data is fully anonymised and cities will require real-time portals for citizens to review their data. However, Professor Kitchin thinks there is a strong chance that organisations building smart cities will not treat the data with the levels of privacy demanded by GDPR. There will almost certainly be years of legal battles in the European Court of Justice.
There are, though, a handful of examples of good practice from cities that recognise the threats to privacy and security. Seattle, for example, has established a Privacy Advisory Committee (PAC) to assess how authorities store and use data. Though led by technology experts, the PAC also includes police, fire, lighting, transportation, information technology and law departments, as well as the public library, academics and industry leaders. Together, they establish a toolkit that protects privacy and enhances security. In Europe, Barcelona and Amsterdam have taken a similar route. “They’ve completely changed how they’re introducing the technology and how they’re using it to run the city,” says Professor Kitchin. “There’s more ‘technological sovereignty’, which means the data analysis has to serve citizens rather than exploit them, or control them. They are leading the way and the hope is that more cities will follow their approach.”