Show Your Face and AI Knows Who You Are
Biometric recognition systems observe and track people by capturing their biometric characteristics and processing the resulting data. Such measurable characteristics include facial features, gait, voice, or the patterns in the iris of the eye. These recognition systems are used in the workplace, during examinations (“proctoring”), in law enforcement, and in public spaces (for example in supermarkets, at train stations, or in parks).
By matching people’s faces filmed in public spaces against images from a database, AI systems identify individuals in a crowd. Police authorities using such systems usually do not even know whether the wanted person they are looking for is actually at the monitored location. And it is not only an individual suspect’s biometric data that is matched against the database, but the data of everyone who is or was at the scene – including people who are not involved in any criminal proceedings.
Generally, measures taken in the police and judicial prosecution of criminal offenses and in the protection of public safety must not undermine the protection of fundamental rights. Restrictions of fundamental rights can only be justified if they have a legal basis or serve an overriding objective in the public interest. The core of fundamental rights must remain unaffected.
Technical surveillance and biometric identification systems are designed to process data from a very large number of people and match the gathered data with data from databases. For this reason, their use in publicly accessible areas amounts to mass surveillance.
Biometric identification systems are currently being tested and deployed prematurely throughout Europe and beyond. They already operate in stadiums, airports, casinos, and schools. Police authorities use them for law enforcement, and in several countries the systems have been used for social distancing control during the COVID-19 pandemic.
Identification can be carried out live and on site, but also remotely and retrospectively. In the latter case, the matching with databases does not happen in real time but at some later point, using video recordings. Those being monitored cannot possibly know when the monitoring actually takes place. Many current camera models already come equipped with such a function.
The term “facial recognition” often refers to biometric recognition. It has become a synonym for “remote biometric identification.” But what do “identification,” “real-time,” “retrospective,” and “remote” actually mean?
Identification vs. authentication
Biometric remote identification must be distinguished from biometric authentication. With AI-supported verification tools, people unlock their phones with their fingerprints, for example. There is no mass collection of data and no matching with databases involved. The users themselves handle the set-up and the data remains on the device.
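The technical difference boils down to 1:1 versus 1:N matching. The following minimal Python sketch (with hypothetical helper functions and thresholds, not taken from any real product) contrasts verification – comparing one face or fingerprint against the single template stored on a user’s own device – with identification, which searches an entire watchlist database for the closest match.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric feature vectors ("embeddings")."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled_template: np.ndarray,
           threshold: float = 0.8) -> bool:
    """1:1 authentication: does the probe match the single template
    stored on the user's own device (e.g. to unlock a phone)?"""
    return cosine_similarity(probe, enrolled_template) >= threshold

def identify(probe: np.ndarray, watchlist: dict,
             threshold: float = 0.8):
    """1:N identification: compare one face captured in public against
    every entry in a database and return the best match, if any."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

In the phone-unlock case, the data never leaves the device and exactly one comparison takes place; in the identification case, the biometric data of everyone who passes the camera has to be captured and compared against the database.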
Real-time vs. retrospective identification
Biometric data is processed either in “real time” (when data collection and data analysis coincide) or “retrospectively,” in which case the captured data is analyzed at some later point. But what exactly does “later” mean? After a minute, or after a day? No one can say where real-time analysis ends and retrospective analysis begins, as the terms have never been precisely defined.
In some cases, fundamental rights are particularly at risk when data is processed retrospectively. Governments or authorities such as the police can use sensitive personal data to track where people have been, what they have done, or with whom they have met – over the course of weeks, months, or even years. This could, for example, keep journalists’ sources from giving them important information, as they can no longer be sure of remaining anonymous.
Remote identification
What does “remote” in “remote identification” refer to?
In a typical case of remote identification, cameras installed throughout an airport record the many people present in order to process their biometric data. The matching takes place away from the place where the data was collected, and the people recorded do not actively take part in the identification process. Such active participation occurs, for example, when people place their fingers on a scanner to have their prints taken. How far away a location must be to count as “remote” is not defined.
Essentially, an identification is considered “remote” if, on the one hand, there is a physical distance between the data collection and the data processing, and if, on the other hand, the people whose biometric data is collected and processed are not actively involved.
Secure technology in an insecure world?
Security authorities and security system providers tout face recognition as an innovative and reliable method of improving law enforcement. The need for security in our society is fundamentally legitimate. However, it becomes problematic when it leads to fundamental rights being undermined.
If people can be identified or monitored in public spaces at any time, this not only violates their right to privacy but also has a deterrent effect: they may be discouraged from exercising other fundamental rights, such as freedom of expression or freedom of assembly – from taking part in demonstrations, for example, or from visiting venues that might provide information about their political or sexual orientation. As biometric features are part of the body, they can only be hidden in public with some effort. In the USA, for example, students demonstrating on university campuses covered their faces and bodies to stop face and gait recognition systems from gathering any usable data.
Experience shows that particularly repressive governments count on this effect, as happened recently in Argentina. Two days before a major demonstration, the government threatened to use face recognition to identify people and then cut their social benefits. As a result, only a few people took to the streets. The government had successfully intimidated the population and prevented them from public political protest.
Such consequences of biometric mass surveillance typically hit already disadvantaged individuals and groups as well as political activists particularly hard. In Russia, people who had attended the dissident Alexei Navalny’s funeral were arrested. They were identified by face recognition software that had analyzed images of the funeral service taken from surveillance cameras or circulating on social media.
Technological discrimination
When biometric surveillance systems are used, fundamental rights are often disproportionately restricted without a legal basis: such systems jeopardize people’s freedom without contributing significantly to greater security. Nor do the systems work nearly as well as their providers want us to believe. Time and again, they flag innocent people as dangerous.
In a test run at Berlin’s Südkreuz station, around one in 200 people was wrongly categorized as “wanted,” which amounts to some 600 false positives every day. Wrongly suspected people are subjected to unpleasant checks. If such systems were rolled out more widely, the police would permanently have to deal with a considerable amount of extra work caused by false alarms – leaving them short of resources elsewhere.
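The arithmetic behind these figures is worth spelling out, because it shows how quickly even a seemingly small error rate scales. The daily passenger count in the sketch below is an assumption back-calculated from the figures above (600 false alarms at a rate of one in 200), not an official number from the pilot.

```python
# Back-of-envelope sketch of the false-alarm load implied by the figures above.
# The passenger count is an assumption derived from those figures, not official data.
false_positive_rate = 1 / 200      # roughly one in 200 people wrongly flagged
daily_passengers = 120_000         # assumed: 600 / (1/200) = 120,000 people per day

false_alarms_per_day = daily_passengers * false_positive_rate
print(f"{false_alarms_per_day:.0f} people wrongly flagged per day")  # -> 600

# Every one of these alarms has to be checked manually by officers,
# which is where the extra workload described above comes from.
```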
Face recognition technology has also been proven to identify dark-skinned and female faces less well. As a result, people of color and women are more often incorrectly reported as suspicious or wanted. This can have serious consequences for them: unjustified checks or even arrests.
The data used to train the systems is one reason for this discrimination against people of color and women. If the training data is not representative – if it contains a disproportionate amount of data from white people and men – the systems are less reliable at recognizing, for example, black women. This increases the probability of false positives, as many cases show. However, even systems trained on representative data can be used in a discriminatory way.
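A toy example illustrates the mechanism (the numbers below are invented for illustration, not measurements of any real system): if a model separates the faces of an under-represented group less cleanly, the similarity scores of non-matching pairs from that group sit closer to the decision threshold, and the same global threshold then produces far more false positives for that group.

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.8  # one global match threshold applied to everyone

# Invented similarity scores for *non-matching* pairs ("impostor" scores).
# A model trained mostly on group A separates group B's faces less cleanly,
# so group B's impostor scores lie closer to the threshold.
impostor_scores = {
    "group_A (well represented in training data)": rng.normal(0.45, 0.10, 100_000),
    "group_B (under-represented in training data)": rng.normal(0.60, 0.12, 100_000),
}

for group, scores in impostor_scores.items():
    false_positive_rate = float(np.mean(scores >= THRESHOLD))
    print(f"{group}: false positive rate ≈ {false_positive_rate:.2%}")

# group_A ends up well below 0.1%, group_B at several percent:
# the same threshold, very different error rates.
```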
In 2018, a man in Detroit was wrongly identified by face recognition software and then accused of shoplifting. As a result, the city agreed to pay the man compensation and reviewed how the local police force uses the technology. After the assessment, no more arrests were allowed to be made solely on the basis of face recognition results. Old cases were to be reviewed.
Police in New Orleans have used facial recognition technology 15 times since October 2022. With one exception, this was to identify black suspects. In only three cases did the use of the technology actually lead to an arrest.
In the USA, a woman in the late stages of pregnancy was arrested after a face recognition tool reported her as a suspect. All documented arrests following such false positives have involved black people.
In the UK, a supermarket surveillance system with face recognition falsely identified a customer as a known shoplifter. She was escorted out and told not to enter the supermarket chain’s stores again. The service provider later admitted that the system had made a mistake. Many retailers in the UK have installed this system in their stores.
Silkie Carlo from the NGO Big Brother Watch has filmed many police operations in which face recognition was used. She observed that the police’s use of face recognition ultimately amounts to a digital line-up.
Now or later? Surveillance through the back door
The final version of the EU’s AI Act prohibits police and law enforcement from using biometric surveillance in public spaces. However, it allows for many exceptions and gives law enforcement, security, and migration authorities a great deal of leeway. Some areas fall entirely outside the AI Act’s scope: military, defense, and national security. Such broad exemptions for law enforcement and security authorities are an invitation to expand public surveillance throughout Europe.
In December 2021, the governing coalition in Germany stated in its coalition agreement that biometric recognition in public spaces had to be banned under European law. However, this position referred only to real-time recognition. The AI Act now imposes strict requirements on real-time biometric identification. Retrospective remote biometric identification in public spaces, on the other hand, is much easier to justify: the mere suspicion of a criminal offense is enough to allow the use of such systems.
Critics fear that retrospective biometric identification could lead to data retention, for example if the data serves as evidence or can be used in a manhunt. They do not want video surveillance of large events (such as the Olympic Games) or of certain places – with the biometric data evaluated later to identify people – to become standard procedure.
Despite all the risks: biometric recognition is spreading in Europe
The AI Act lets EU member states customize the rules on biometric surveillance. Since biometric surveillance has not been completely banned across Europe, bans have to happen at the national level. The German government could still implement a complete ban, in line with its coalition agreement.
Meanwhile, German law enforcement agencies are already working on expanding their ability to use biometric recognition systems, for example to track internet users by means of their biometric data. This would jeopardize the right to anonymity on the internet as well as data protection, self-determination, and the presumption of innocence. The German Federal Minister of the Interior, Nancy Faeser, is nonetheless giving in to the wishes of law enforcement by amending the law. Not only are such measures unconstitutional, they are also incompatible with the very coalition agreement that mandates the right to anonymity on the internet be preserved.
German authorities have already operated on the edge of legality on several occasions when using biometric recognition technologies. Saxon police used a live face recognition system in Görlitz on the Polish border. The responsible data protection authority had not been informed and considers the system to be illegal. In order to test face recognition software, the Federal Criminal Police Office provided a research institute with a data set of three million images in 2019 – a step whose legality is highly questionable.
At the European level, the EU is working on “securing” its external borders by installing biometric recognition systems. The D4FLY project combines “2D+thermal facial, 3D facial, iris and somatotype biometrics.” In projects such as iBorderCtrl, governments examine emotions and “micro-expressions,” fleeting facial expressions that last only fractions of a second, to assess whether travelers are trustworthy or lying to (virtual) border officials. Such automated risk assessments could lead to stricter security checks at EU borders.
Such pilot projects often run temporarily without a legal basis. And once a test phase has taken place, it has proved much easier to introduce the technologies permanently without any prior public debate.
Due to the risks of biometric identification systems, various cities around the world (including San Francisco, Portland, and Nantes) have already banned the use of recognition systems in public spaces. In 2021, the UN Office of the High Commissioner for Human Rights also spoke out in favor of significantly restricting or banning the use of biometric recognition systems in publicly accessible spaces. The European Data Protection Board and over 200 non-governmental organizations worldwide have also warned of this technology’s social consequences.