AI Act: Leading MEPs revise high-risk classification, ignoring negative legal opinion

The EU lawmakers spearheading the work on the EU’s AI bill have circulated a new version of the provisions regarding the classification of high-risk AI systems, maintaining the filter-based approach despite a contrary legal opinion.

The AI Act is a landmark piece of EU legislation to regulate Artificial Intelligence following a risk-based approach. At the heart of the law is a stricter regime for AI systems that pose a significant risk to people’s health, safety and fundamental rights, which must comply with tight requirements regarding risk management and data governance.

In the original proposal, all AI solutions falling under a pre-set list of critical use cases were deemed automatically high-risk. In the past weeks, EU policymakers have discussed a series of exemption conditions allowing AI developers to avoid the high-risk classification.

However, this approach has been harshly criticised by the European Parliament’s legal office, which considered it to run counter to the very objective of the AI Act and conducive to legal uncertainty.

The legal experts did not rule out a filtering system completely but considered that the exemption conditions needed to be narrower and specific to each critical use case rather than leaving AI providers in the position of self-assessing their models.

Exemption conditions largely maintained

Still, on Friday (20 October), the offices of the EU Parliament’s co-rapporteurs Dragoș Tudorache and Brando Benifei circulated a new version of the text, seen by Euractiv, that maintains horizontal exemption conditions and largely overlooks the negative legal opinion.

The new text was discussed at a meeting with the representatives of the other political groups on Monday (23 October), ahead of a negotiating session with the EU Council and Commission the following day.

The exemption criteria have been tweaked, with additional examples introduced in the text’s preamble to explain their application better. The role of market surveillance authorities and the delegated power of the EU Commission have been further refined.

A specification was introduced that the criteria apply, including if the AI model is “not materially influencing the outcome of decision making”.

The first criterion applies when the AI system is intended to perform a narrow procedural task. The example given is that of an AI model that transforms unstructured data into structured data or classifies incoming documents into categories.

The second way to avoid the high-risk regime is if the AI solution is meant to review or improve the result of a previously completed human activity, merely providing an additional layer to human work. This might be the case for AI models used to improve the language of a document.

Thirdly, the exemption applies if the AI system is purely intended to detect decision-making patterns or deviations from prior decision-making patterns to flag potential inconsistencies or anomalies, for instance, in the grading pattern of a teacher.

The fourth criterion was untouched and relates to AI models used to perform preparatory tasks to an assessment relevant to the critical use cases. Here, it is the preparatory role of the output that is considered to make the system’s impact low in terms of risk. Examples include file-handling software.

At the same time, the idea that any AI system carrying out people’s profiling will be deemed high-risk regardless of these criteria was maintained.

The previous iteration of the text also required AI providers who considered their systems not to be high-risk to outline their assessment in the technical documentation and to provide it to the national market surveillance authority upon request.

The new text indicates that market surveillance authorities should be able to evaluate an AI system if they have sufficient reason to consider that it should be classified as high-risk and, if that proves to be the case, to request that the system be brought into compliance with the regulation.

A fine can be imposed if the market surveillance authorities have ‘sufficient evidence’ that the AI provider misclassified their system to circumvent the AI law.

The European Commission is empowered to update the criteria in light of technological developments or to align them to amendments to the list of critical use cases.

Two new conditions were introduced for these delegated powers to take effect: there must be concrete and reliable evidence of AI systems that fall into the high-risk category but do not pose a significant risk, and the new criteria must not decrease the overall level of protection of the AI law.

[Edited by Nathalie Weatherald]
