EU nears consensus on child abuse draft law, new agency takes lead on privacy preservation

A proposed centralised agency to support the detection and removal of child sexual abuse material (CSAM) will also assess how to technically preserve privacy while detecting such content in the visual communications included in the law’s scope, according to the latest compromise text of the draft law to fight CSAM.

The draft, dated 14 June and seen by Euractiv, gives examples of the visual communications that fall within authorities’ powers to detect CSAM, as previously reported by Euractiv.

The proposal was sent by the Belgian Presidency of the EU Council to the Permanent Representatives Committee (COREPER), which brings together the ambassadors of the 27 member states. This suggests that the file, stuck in the legislative pipeline for months, could finally move forward.

As Euractiv reported, during last week’s Justice and Home Affairs Council meeting, the Belgian presidency addressed delegations’ concerns on the draft law.

The Belgians aim for a COREPER agreement, with Commissioner for Home Affairs Ylva Johansson expecting the interinstitutional negotiations, known as trilogues, to start after the summer.

The previous draft excluded audio communications from the scope, but included visual content.

The latest version excludes text communications and clarifies that detection orders apply only to visual content, from images and video components to GIFs and stickers. However, the document says that the solicitation of children should still be identified through visual components as much as possible.

The regulation aims to create a system for detecting and reporting online CSAM.

It faced criticism for potentially allowing judicial authorities to request the scanning of private messages on platforms like WhatsApp or Gmail, which are currently protected through end-to-end encryption.

EU Centre’s role in privacy

The latest draft passes the baton to the EU Centre and the European Commission when it comes to end-to-end encryption (E2EE). The regulation should not weaken cybersecurity measures, including E2EE, the draft says.

E2EE ensures that only the sender and receiver can read a message, keeping it private even from the platform provider, such as WhatsApp or Signal.

To maintain E2EE, technologies intended for detecting CSAM in E2EE services must be certified and tested by the EU Centre before the European Commission approves “the technologies that can be used to execute the detection orders”.

The EU Centre is also envisioned as helping platform providers assess the cost of anonymised data analysis to detect CSAM. Providers must implement parental control mechanisms, handle reports of potential CSAM, and generate statistical data for assessment.

Providers can seek technical support from the EU Centre about privacy-preserving age verification measures, with costs covered by the Centre for micro, small, or medium-sized enterprises. The Commission may issue delegated acts on cost-sharing.

Delisting and blocking orders

The new text requires online search engine providers to delist websites displaying CSAM.

Delisting and blocking orders can be issued by a member state’s competent or judicial authority to online service providers. These companies must then notify users of the reasons for the delisting, so that they can exercise their right to redress.

Competent authorities are the national judicial authorities. Member states requiring judicial authorisation must inform and update the Commission.

Providers must promptly notify the issuing authorities of any inability to comply, report on actions taken to block access to CSAM, and regularly update on their effectiveness.

Risk assessment

Risk assessments must be updated at least once every three years for low-risk services, at least once every two years for medium-risk, and at least once every year for high-risk services, the new text reads.

Based on the reports providers submit, the Coordinating Authority categorises services by risk level and can request assistance from the EU Centre.

Based on the results, services are classified as high, medium, or low risk, ensuring qualitative and comparable assessments.

The Commission may establish additional acts to define risk categorisation methodology and criteria.

Coordinating Authorities, which oversee the implementation of the law in each country, can demand updates from other member states and from low- to medium-risk service providers. Assessments should pinpoint risks to specific service components or user groups, aiming to mitigate CSAM effectively.

Providers of hosting or interpersonal communication services must mitigate identified risks of CSAM, focusing on specific parts or users.

They should provide accessible tools for users to report CSAM and share information about hotlines. Statistical data should be collected to evaluate risks, without including any personal data.

[Edited by Eliza Gkritsi/Zoran Radosavljevic]
