In the UK we have become used to surveillance cameras. It is estimated that there are 1.85 million CCTV cameras in the UK, or one camera for every 36 people. As we go about our lives we are, on average, seen by 70 cameras every day. There are 309 cameras at the Oxford Circus tube station alone. And while the cost-to-benefit ratio can be debated, CCTV cameras have been credited with a small but statistically significant reduction in crime, and they helped police identify the Novichok killers in Salisbury.

We are also getting used to facial recognition in our lives. We can use our faces to unlock our phones. The FaceApp challenge enticed us to share photos of our aged selves on social media. At the airport we use ePassport gates to enter the UK, and some airlines are trialling face recognition for boarding as well. There is even an app on offer that records attendance using face recognition. These are the uses of face recognition that are transparent to users.

There is also the other kind. Face recognition was used for almost two years in privately operated CCTV cameras overlooking a busy pedestrian street at King’s Cross. Furthermore, police passed the CCTV operator images of seven people to use in the face recognition system. Officially, this was done to allow the property company “to discharge its responsibilities to prevent and detect crime”, although it is unclear how exactly this was supposed to happen. The agreement was kept secret from the public, as well as from the Metropolitan Police and the office of the mayor. Unfortunately, the parties involved have declined to release any information about how these images were used and what impact this had. It took almost two years for the public to be informed about the arrangement.

One new technology on its own is progress, but two new technologies combined can be truly transformative. The first GPS satellite was launched in 1978, but before smartphones a dedicated GPS receiver was necessary to use the system. Early receivers had limited displays and showed just your coordinates, which you then had to mark on a paper map yourself. Combined with the smartphone, however, GPS allows us to never be lost as long as we have internet access and a charged battery. The ubiquitous availability of GPS has changed our view of the world. It allows us to move through it much more purposefully, it reduces the value of detailed local knowledge, and it eliminates the question, “Where am I?” We might not know where our destination is, but we know exactly where we are.

The deployment of a comprehensive network of CCTV cameras equipped with accurate face recognition systems will, if left unregulated, be equally transformative for society, with effects that we don’t fully appreciate yet. The ACLU recently released a report entitled “The Dawn of Robot Surveillance” that analyses the potential risks of this combination of technologies, and security expert Bruce Schneier has written an article about it.

Surveillance cameras used to be passive. They record us, but most of the time the video shows nothing of interest. For humans, the process of “monitoring video screens is both boring and mesmerising”, and paying people to monitor cameras is expensive. As a result, most surveillance footage is never watched and is eventually overwritten. This is changing. Computers don’t get bored and don’t lose focus, and AI technology has advanced sufficiently in recent years to make automated, real-time analysis of surveillance video feasible and cost-effective. As the ACLU report puts it,

It is as if a great surveillance machine has been growing up around us, but largely dumb and inert—and is now, in a meaningful sense, "waking up."

Today, facial recognition is not perfect. The Metropolitan Police tested facial recognition technology in London by deploying cameras and comparing the people captured on them against outstanding arrest warrants. Of the 46 potential matches flagged by the system, only 8 were correct identifications; the other 38, more than 80% of the flagged matches, were wrong. But we shouldn’t feel comforted by this lack of accuracy. Today we are talking about facial recognition not being accurate enough. Five years ago such a deployment was not even considered feasible. Considering how much money is being invested in surveillance technology and how many smart people are working on improving face recognition, where will we be five years from now?

Of course, regardless of the invested effort, facial recognition won’t be perfect; no AI-based technology ever will be. Some will argue that it will be good enough that the mistakes won’t matter. This is not true. Mistakes will matter. They always matter to the individual whose face and identity have been mistaken. But, statistically—and those who decide about deployment usually think statistically—if mistakes are rare enough, they become a cost that is accounted for and accepted.

Where does this leave society? Following the 2011 riots in London and other English cities, police used CCTV footage to identify and charge more than 2000 people who participated in the riots. To achieve this, the Metropolitan Police assigned 450 detectives to review footage, identify suspects and carry out arrests. The severity of the crimes justified the labour-intensive process of manually going through hours of surveillance footage that was necessary to bring rioters to justice.

In April this year London saw the Extinction Rebellion protests, with protestors occupying Oxford Circus, Marble Arch and Waterloo Bridge for several days and the police arresting more than 1000 activists in their effort to break up the protests. Imagine for a moment a surveillance system that would allow the police not just to film the protests, but to automatically identify each person in the video and track them throughout the protests. It would radically decrease the financial cost of identifying who to arrest and of collecting evidence to be used in court. For each person, the police would have access to an automatically compiled dossier of protest activity. And the lower the cost of prosecution, the higher the temptation to prosecute in order to be seen as tough on crime. But is it in the public interest to prosecute Extinction Rebellion activists? Or did the protests serve an important role in the public debate about the climate crisis?

What can we do? On October 30, Lib Dem peer Lord Clement-Jones introduced a private member’s bill that would outlaw the use of facial recognition technology in public places in the UK until a review, to be commissioned by the Secretary of State, has been conducted. The bill would not stop the development of the technology, but it would give society the opportunity to pause, take a breath and reflect on the impact of facial recognition.

Similar efforts are happening elsewhere in the world. In May, San Francisco became the first city in the US to ban the use of face recognition by local agencies, including law enforcement. Since then, Somerville, Oakland and Berkeley have followed suit, with other US cities considering similar bans. And the European Commission is also considering restrictions on “the indiscriminate use of facial recognition technology” by businesses and public authorities.

We now have a window of opportunity to influence how face recognition technology will be used in the future: the technology is good enough that we can see its potential effects, but it has not yet been deployed widely. Once it has spread far enough, restricting or even regulating its use will become much more difficult. Let us act now, before that happens.