AI Act: EU Parliament’s legal office gives damning opinion on high-risk classification ‘filters’

EU policymakers have been discussing a set of filter conditions that would enable AI developers to avoid complying with the stricter regime of the EU's Artificial Intelligence law. But this political compromise is running into significant legal trouble.

The AI Act is landmark EU legislation to regulate Artificial Intelligence based on its capacity to cause harm. As such, the law follows a risk-based approach whereby AI models posing a significant risk to people's health, safety and fundamental rights must comply with a stricter regime on aspects such as data governance and risk management.

The original Commission proposal designated as high-risk all AI systems falling into a pre-determined list of critical areas and use cases, such as employment and migration control. Both the European Parliament and the Council, the EU co-legislators, removed this automatic designation and introduced an extra layer to prevent AI solutions posing minimal risk from being captured by the stricter regime.

Filter conditions

At the last negotiating session on 2 October, EU policymakers discussed a compromise text introducing three filter conditions; AI providers whose system meets one of them could consider it exempt from the high-risk category.

The conditions concern AI systems that are intended for a narrow procedural task, merely to confirm or improve an accessory factor of a human assessment, or to perform a preparatory task.

Some extra safeguards were added: any AI model that carries out profiling would still be deemed high-risk, and national authorities would remain empowered to monitor systems registered as non-high-risk.

Profound doubts

However, MEPs requested a legal opinion from the EU Parliament's legal service. The opinion, seen by Euractiv and dated 13 October, casts profound doubt on the legal soundness of the filter conditions.

Notably, the legal experts pointed out that it would be up to the AI developers themselves to decide whether a system meets one of the filter conditions, introducing a high degree of subjectivity that “does not seem to be sufficiently framed to enable individuals to ascertain unequivocally what are their obligations”.

While the compromise text tasks the European Commission with developing guidelines on applying the filters, the legal office notes that guidelines are by nature non-binding and hence cannot alter the content of the law.

Most importantly, for the Parliament’s legal office, leaving this level of autonomy to AI providers stands “in contrast with the general aim of the AI act to address the risk of harm posed by high-risk AI systems”. This contradiction is seen as conducive to legal uncertainty.

Similarly, the opinion deems the filter system at odds with the principle of equal treatment, as it could lead to situations where high-risk models are treated as non-high-risk and vice versa, and with the principle of proportionality, as the system is considered incapable of achieving the regulatory aim of the AI Act.

The legal experts set out two conditions for a filter approach to be legally sound. First, the exemption conditions should be sufficiently precise to leave no margin for error in the classification of AI models. Second, it should be the legislator that assesses whether an AI application poses a high risk or not.

In other words, instead of horizontal exemption conditions that apply across all use cases, EU policymakers are invited to take a case-by-case approach and define the conditions under which AI solutions in a given critical area do not pose a significant risk.

Finally, the EU lawyers took aim at the powers the compromise text attributed to the European Commission, which was tasked with updating the filter conditions via a delegated act. For the Parliament’s legal office, that should not be the case because the filter conditions “constitute essential elements of the act.”

Legal opinions can be highly influential in shaping EU legislation. EU policymakers are now expected to produce a revised version of this critical aspect of the AI law.

[Edited by Nathalie Weatherald]