
01 February 2024

Regulating facial recognition technologies: is your face too personal?

Biometric technology, especially facial recognition, has taken off in recent years. From airport security to simply unlocking a smartphone, facial recognition is now a fixture in everyday life. Biometric tools are also becoming increasingly popular with commercial brands and have proven to be an intriguing marketing tactic (a well-known pizza chain offering food recommendations based on how an AI reads human moods is both fascinating and troubling).

AI experiment

As part of our AI experiment in roadmap24, we have curated a few prompts and asked AI about this article. Take a look at how ChatGPT responded*:



Given the amount and sensitivity of the personal data that biometric technologies process, and the impact this processing has on data subjects, it comes as no surprise that legislators are taking a closer look at this topic. Back in June 2021, the Council of Europe issued its Guidelines on facial recognition, divided into four main chapters: (i) guidelines for lawmakers; (ii) guidelines for developers, manufacturers and service providers; (iii) guidelines for facial recognition users (entities); and (iv) rights of data subjects. Just three months later, the European Parliamentary Research Service issued its own publication, "Regulating facial recognition in the EU", in which it raised the topical issue of gender and racial discrimination arising from the use of facial recognition.

The Guidelines

In May 2022, the European Data Protection Board adopted the first version of Guidelines 05/2022 on the use of facial recognition technology in law enforcement. After public consultations, a final version was adopted on 26 April 2023. The Guidelines aim to provide direction to national and EU legislators and law enforcement authorities on the application and use of facial recognition technologies. They also distinguish between two main functions of facial recognition technology: authentication and identification. Both involve the processing of biometric data relating to an identified or identifiable natural person and therefore constitute processing of personal data, and more specifically, processing of special categories of personal data.

For data controllers relying on facial recognition technologies, a fundamental obligation is to treat the reliability and accuracy of the technology as criteria when assessing compliance with key data protection principles, particularly the fairness and accuracy of data processing. This may be achieved by regularly and systematically evaluating algorithmic procedures. Another crucial aspect is establishing ground rules under which personal data used to evaluate, train and further develop facial recognition systems may be processed only on a sufficient legal basis and in accordance with general data protection principles.

The Guidelines notably refer to the EU Charter of Fundamental Rights and the European Convention on Human Rights, especially in the context of the right to respect for private life in the processing of personal data. According to the Guidelines, even the mere processing of biometric data constitutes a serious interference with the fundamental rights of data subjects. Any limitation of such rights must therefore be clearly justified under the law, and facial recognition mechanisms should be used only where necessary and proportionate. This is also dictated by the need to comply with the rules on non-discrimination and to avoid bias and incorrect identification (false positives), which could have severe implications, including in criminal proceedings.

New challenges, new laws

Studies and practice show that algorithmic discrimination, bias and favouritism occur frequently and can be avoided only through constant revision and improvement of methodologies and technologies. In June 2023, the European Parliament once again examined these topics and, in the highly anticipated Artificial Intelligence Act, defined negative bias as a bias that "creates direct or indirect discriminatory effect against a natural person". The use of AI systems should therefore be based on respect for general principles establishing a framework that promotes a coherent, human-centric approach to ethical and trustworthy AI.

Interestingly, the EU Charter of Fundamental Rights is referenced even in legislation on highly technological issues: such general principles should be in line with the Charter and the values on which the European Union is founded, including the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, non-discrimination and fairness, and societal and environmental wellbeing. Furthermore, AI systems that perform biometric categorisation according to known or inferred sensitive or protected characteristics are treated as particularly intrusive, since they may violate human dignity and carry a high risk of discrimination. As such, the use of facial recognition is considered harmful, and people should be protected against biometric surveillance in public spaces, emotion recognition in key sectors, biometric categorisation, predictive policing and social scoring.

A high degree of consideration for the rights of data subjects in the context of facial recognition is a step in the right direction, as the technology has a significant impact on individuals and their daily activities. It should not, however, raise concerns about a slowdown in technological development. Time will tell whether the current EU approach will endure or whether the new developments and efficiencies introduced by the use of AI will take over.


author: Daria Rutecka

AI experiment

* The AI add-on to this article ...

... has been curated by our legal tech team prior to publication.

... has been compiled by AI. Its results may not accurately reflect the original content or meaning of the article.

... aims to explore AI possibilities for our legal content.

... functions as a testing pilot for further AI projects.

... has legal small print: This AI add-on does not provide and should not be treated as a substitute for obtaining specific advice relating to legal, regulatory, commercial, financial, audit and/or tax matters. You should not rely on any of its outputs as (formal) legal advice. Schoenherr does not accept any liability to any person who does rely on the content as (formal) legal advice.