AI Act: Czech EU presidency makes final tweaks ahead of ambassadors’ approval

The Czech presidency of the EU Council shared with the other EU countries on Thursday (3 November) the final version of the AI Act, a flagship EU legislative initiative, which is set to be approved at the ambassador level by mid-November.

The AI Act aims to introduce the first comprehensive set of rules for Artificial Intelligence based on the potential for harm. The Czech presidency put the file at the top of its digital agenda and is well on its way to finalising the EU Council’s position.

As anticipated by EURACTIV, the latest text introduced only minor changes to a version from two weeks ago. The AI Act is scheduled to be approved by the Committee of Permanent Representatives on 18 November and formally adopted by EU ministers at the Telecom Council meeting on 6 December.

General purpose AI

The final text confirms the Czech presidency’s solution to apply the AI rulebook to general purpose AI, large models that can be adapted to execute various tasks. Most EU countries agreed to task the European Commission with tailoring the obligations for these systems via an implementing act.

The new compromise clarifies that the provisions related to the obligations for high-risk AI providers, the appointment of a legal representative in the Union, the declaration of conformity with EU law, and the post-market monitoring will only apply to general purpose AI providers once the implementing act enters into force.

Law enforcement

At the request of member states, the Czech presidency introduced broad carveouts for law enforcement authorities.

One of the most significant ones empowers police forces to request the relevant national authority to put into use a high-risk system that has not passed the conformity assessment procedure.

This authorisation may be bypassed in exceptional circumstances, such as an imminent threat to a person’s life.

However, new wording was added for cases where the authority rejects the request, obliging the law enforcement agency to discard all results and outputs produced by that system.

Public assistance benefits

The regulation’s preamble was changed to clarify that systems determining the entitlement to public assistance benefits and services from the public sector are to be considered high-risk.

This wording recalls a massive scandal in the Netherlands, where tax authorities mistakenly suspected thousands of citizens of benefit fraud due to a flawed algorithm.

The text was also aligned with the inclusion of certain insurance services, such as life and health insurance, in the list of high-risk systems.

Critical infrastructure

In terms of critical infrastructure, the elements purely dedicated to cybersecurity have been excluded from the high-risk classification.

Only safety components that “might directly lead to risks to the physical integrity of critical infrastructure and thus to risks to health and safety of persons and property” have been included under the high-risk category.

Transparency

The AI Act requires certain AI systems, such as those generating deep fakes, to comply with transparency obligations unless it is reasonably evident that the content is manipulated.

In this regard, the new text specifies that individuals belonging to vulnerable groups in terms of age or disability should be taken into account when complying with these provisions, to avoid discrimination.

[Edited by Zoran Radosavljevic]