Publishing Date: 6 April 2025
Author: Maria Fritsch
AI facial recognition, aickoto.com, April 1, 2025.
Facial recognition technology (FRT) is no longer dystopian fiction. In the European Union, FRT is widely used in surveillance and law enforcement, with significant consequences for fundamental rights. Governments, corporations, and law enforcement agencies deploy FRT for verification, identification, and categorisation. Identification and categorisation allow authorities to track individuals and may infringe collective and individual rights, particularly privacy and freedom of expression. The EU Artificial Intelligence Act (AI Act) aims to regulate high-risk AI systems, including FRT. But does it truly protect individuals from fundamental rights violations?
One of the greatest weaknesses of the EU AI Act is its dependence on self-assessment, which limits its effectiveness as a regulatory instrument. While the Act introduces obligations for providers of FRT systems, it lacks strong enforcement tools. Enforcement oversight is resource-intensive and requires substantial coordination among multiple regulatory agencies. As a result, AI providers can evade their obligations with little risk of detection.
A key issue with enforcement is the pre-market risk assessment, which determines whether an FRT system qualifies as high-risk. Without sufficient resources and independent oversight of this assessment, it may be ineffective. The AI Act initially designated AI systems as high-risk by default if they fell into the categories listed in Annex III. Later versions introduced a pre-market risk assessment under which providers evaluate for themselves whether their systems pose a risk to fundamental rights. This has weakened regulatory oversight: by claiming their systems are not high-risk, companies can avoid compliance obligations.
Under Article 6, the AI Act exempts AI systems from the high-risk classification if they meet vague criteria, such as performing a narrow procedural task or improving the result of a previously completed human activity. These loosely defined exemptions create a legal loophole, allowing AI providers to frame their FRT systems in ways that fit the criteria.
Another major concern is that AI providers can place their FRT systems on the market immediately after conducting an internal risk assessment; no external approval is required. This arrangement appears influenced by industry lobbying. It benefits AI developers by easing regulatory barriers while putting further strain on under-resourced regulatory bodies. Article 6 of the AI Act mandates that such systems be registered in a public database, but the risk assessment process itself is inaccessible to the public. This lack of transparency prevents independent oversight and makes it difficult for civil society to assess whether FRT systems comply with the AI Act. Weak enforcement mechanisms, vague exemption criteria, and limited transparency together allow AI providers excessive discretion. Without more decisive measures, the AI Act may not effectively prevent the deployment of FRT systems that restrict fundamental rights.
Another limitation of the EU AI Act is its inability to adapt to the rapid evolution of FRT. While the Act regulates existing FRT systems, it does not address the swift development of ever more intrusive ones. Multiple scholars argue that technological advancements in AI have outpaced regulatory developments. Because the AI Act's risk-based approach is reactive, new FRT may reach the market before it is effectively regulated. This is especially true in the EU, where the legislative process can take years. Without mechanisms for continuous updating, evolving FRT could severely undermine fundamental rights.
This suggests that while the Act is a significant step in AI regulation, regulatory gaps remain. AI providers assess their own systems, potentially avoiding oversight; the development of FRT may outpace the Act; and the lack of transparency in risk assessments limits public scrutiny. In its current form, the AI Act does not provide effective protection: stronger enforcement mechanisms and independent oversight are required. Facial recognition technology is advancing quickly, while regulation struggles to keep up, turning AI governance into a race against time. If the AI Act does not develop alongside FRT, fundamental rights may not be adequately protected.