So far, the police seem unable to do much about all these explosions in the city. To what extent can digital surveillance contribute to stopping these attacks?
“Research shows that cameras have very little preventive effect. On top of that, the big problem here is that the question of where you even want to point a camera has become more and more diffuse. Twenty years ago you could simply watch the street where a criminal lived; now it’s a whole network of family and friends who are being threatened and who live all over the place. So when it comes to digital surveillance, I have little confidence in it.”
So what kind of surveillance do you have confidence in?
“Analogue surveillance, the human approach, where police officers build a relationship with people in the neighbourhood who can then spot things and notify the police, or even intervene themselves. But this requires trust in the police, and that’s a major problem. In districts such as Hillesluis and Bloemhof, trust in the police is much lower than the national average. If you ask the Dutch population whether they have confidence in the police or a good relationship with them, about 70 percent will say ‘yes’ nationally, whereas the figure is around 50 percent in these districts. That affects the amount of information available to the police in these neighbourhoods, which makes it difficult to prevent these attacks.”
Marc Schuilenburg is professor of Digital Surveillance at Erasmus University Rotterdam and assistant professor of Criminology at VU University Amsterdam. He has written several books on security, such as The Securitization of Society (2015).
At the same time, you see more and more cameras everywhere these days, which gives the impression that the police have more confidence in cameras than in their own community police officers with good local connections.
“Yes, just to be clear: cameras can be useful for detecting the perpetrators, although by then you are already too late; the explosion has happened. But there is a persistent belief that you can prevent crime with cameras. You see an increasing emphasis on data and technology within the police. You could call this a kind of surveillance solutionism: the belief in digital technology as the leading solution to the security issue. The human factor is actually much more effective when it comes to preventing crime, but it is being undermined by this emphasis on technology.”
Why does this have an undermining effect?
“By placing your trust solely in digital forms of surveillance, you are, as it were, delegating your responsibility to the technology. The accompanying risk is that neighbourhood residents will look out for each other less, with a resulting drop in social cohesion and therefore in mutual control, even though social control mechanisms are particularly important in the fight against crime. In addition, the police need the trust of local residents, and trust is connected with the human side of policing. Practices such as ethnic profiling and discriminatory algorithms in digital tools used to detect welfare fraud only damage that trust.”
If you walk down a random street, let’s say in Rotterdam-Zuid, how many cameras are there?
“The number is unbelievable. It’s not only the police cameras, but also the cameras in your phone and mine, security cameras installed by business owners, cameras in cars and on rubbish bins. And don’t forget the huge increase in the use of smart doorbells, which also contain a security camera. You’re completely surrounded by cameras; we live in a kind of glass house.”
To what extent can the police access all of this?
“Very easily when it comes to these explosions. If the police think that the images could help to detect the perpetrators, they can demand them from both doorbells and phones. This probably happened in the explosions in Van Speijkstraat and Crooswijkseweg.”
In recent months, there has been a great deal of discussion about the rapid development of artificial intelligence. What role does AI already play in digital surveillance?
“Surveillance used to be analogue: watching and being watched took place with the naked eye. Then it became digital, with cameras and facial recognition. And now we have algorithms as well, which are used to extract images from all kinds of different data sets; algorithms and artificial intelligence combine and analyse those images. The use of algorithms in detection entails all kinds of dangers, as we have seen with the fraud detection system SyRI (which reported ‘welfare fraud’ based on profiles, ed.) and the Dutch childcare benefits scandal. ‘Algoracism’ played a major part in those cases.”
Are these kinds of systems also used by the police?
“Among other things, the police use CAS, the Crime Anticipation System, with which they hope to predict where and when crime will take place and then tailor the deployment of officers accordingly. The problem is that there is no scientific evidence whatsoever that it works. In Chicago, the police are currently working with a system that makes predictions not only at the neighbourhood level, but also about who the possible perpetrator and the possible victim might be. The Dutch police are very interested in this.”
That sounds quite scary. Could this kind of system even be implemented in the Netherlands under the stricter European privacy legislation?
“The police intelligence services are constantly profiling on social media and secretly collecting data. There’s no good legal basis for that at all. At the same time, a new law on AI has just been adopted in the European Parliament, the so-called AI Act, which – among other things – prohibits biometric facial recognition on the street with exceptions for certain threats, such as the risk of a terrorist attack. It’s good that Europe is playing a pioneering role in this area, and that the legislation here is much stricter than in the USA or China.”
How does this system work in Chicago exactly?
“It draws up a ‘heat list’ using big data techniques: an algorithm calculates a risk score based on data sets from the police themselves and on public data about neighbourhoods and residents. That combination of data is extrapolated to identify people who are potential perpetrators or victims.”
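To make that mechanism concrete, here is a minimal, hypothetical sketch of how such a risk score might be computed. The features, weights, and threshold below are invented for illustration; they do not describe the actual Chicago system, whose details are not public:

```python
# Hypothetical illustration of a "heat list" risk score.
# Feature names, weights, and the threshold are invented for this
# sketch; they do not reflect the real Chicago system.

RISK_WEIGHTS = {
    "prior_arrests": 2.0,             # police data set
    "victim_of_shooting": 3.5,        # police data set
    "neighbourhood_crime_rate": 1.5,  # public data about the area
    "known_associates_flagged": 2.5,  # network data
}

HEAT_LIST_THRESHOLD = 10.0

def risk_score(person: dict) -> float:
    """Weighted sum of features drawn from the combined data sets."""
    return sum(weight * person.get(feature, 0)
               for feature, weight in RISK_WEIGHTS.items())

def heat_list(people: list[dict]) -> list[dict]:
    """Everyone whose score exceeds the threshold ends up on the list."""
    return [p for p in people if risk_score(p) >= HEAT_LIST_THRESHOLD]

# The core problem the interview raises: someone who has committed no
# offence can still cross the threshold purely on correlated features.
example = {"prior_arrests": 0, "victim_of_shooting": 1,
           "neighbourhood_crime_rate": 4, "known_associates_flagged": 1}
print(risk_score(example))  # 12.0 -> on the heat list despite no offences
```

The sketch shows why such systems raise the question posed next: the score is driven by correlations in the data, not by anything the person has done.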
So if your profile happens to match a criminal’s on all these points, but you never do anything wrong, could you still be flagged as a criminal?
“Yes, and this has happened many times with these kinds of systems. By the way, I’m not saying that algorithms never serve a purpose; you can have huge successes with them. Take the EncroChat hack, for example, in which millions of messages were captured from criminals’ phones and searched using algorithms and network analyses.”
“But that’s hindsight. Looking ahead is quite different. My expectation is that, in the end, looking ahead will become less and less important, while looking back and looking in real time will become more and more important, because that does work, and it also carries fewer major ethical risks. This breaks with what has long been assumed in the social and scientific debate, and in science-fiction films such as Minority Report, which was all about looking ahead.”
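The ‘network analyses’ mentioned here can be pictured roughly as follows: a hypothetical sketch that builds a contact graph from intercepted messages and ranks the most central phones. The data and the choice of degree centrality are illustrative assumptions; the actual tooling used in the EncroChat investigation is not public:

```python
# Hypothetical sketch of retrospective network analysis on captured
# messages, in the spirit of the EncroChat example. The data and the
# centrality measure are illustrative; real police tooling is not public.
import networkx as nx

# Each tuple: (sender, receiver) of one intercepted message.
messages = [
    ("phone_A", "phone_B"), ("phone_A", "phone_C"),
    ("phone_B", "phone_C"), ("phone_D", "phone_A"),
]

graph = nx.Graph()
graph.add_edges_from(messages)

# Rank phones by how central they are in the communication network;
# highly central nodes are candidates for further (human) investigation.
for phone, score in sorted(nx.degree_centrality(graph).items(),
                           key=lambda kv: -kv[1]):
    print(phone, round(score, 2))
```

Note that this looks backward over data already collected, which is exactly the distinction drawn above between hindsight and prediction.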
Is the conclusion, then, that the emphasis on technological solutions has led to a reduction in trust in the police?
“Not in a causal way. Trust in technology can create distance from the very people you need for your police work. In the case of the explosions in Rotterdam, the police appear to rely mainly on technology and less on local residents, even though you definitely need that position of trust. At the same time, our trust in the government has been damaged by ethnic profiling and discriminatory algorithms. That is precisely why the police should not use systems that can lead to discrimination or ethnic profiling.”