Belgian EU presidency presents new risk assessment methodology for child sexual abuse law

A new document written by the Belgian EU Council presidency and seen by Euractiv outlines key details for the risk assessment that will form the backbone of a draft law to detect and remove online child sexual abuse material (CSAM).

The document follows the Belgian presidency’s latest approach to the draft CSAM law, which focused on the Coordinating Authority’s roles, such as risk categorisation and detection orders.

The new document, dated 27 March, was sent to the Council’s Law Enforcement Working Party (LEWP), which is responsible for tasks concerning legislation and operational issues associated with cross-border policing.

The Coordinating Authority is a designated body in each EU country responsible for receiving risk assessments, implementing mitigation measures, and coordinating efforts to detect, report, and remove CSAM.

In its approach from 13 March, the Belgian presidency asked member states to provide technical concerns about detection technologies, so that more safeguards can be included in the final legislation.

Based on the member states’ suggestions and the LEWP meetings on 1 and 19 March, the presidency drafted the new document, which gives details about possible criteria and categorisation methodologies to be used in the practical part of the CSAM legislation.

The document also emphasises that nothing should contradict fundamental rights, a point that should be explicitly stated in the regulation.

Possible risk categorisation criteria

The latest document notes that the previous text already suggested a methodology for evaluating the risk associated with services or their components, categorising them into three risk levels based on the risk assessment and the risk mitigation measures taken.

The new document outlines an approach to categorising potential risks associated with online services.

The first suggested categorisation is based on the type of service offered, such as social media platforms, electronic messaging services, and online gaming platforms, among others.

The second delves into assessing the core architecture of these services, including factors like user interaction levels, identification functionalities, and communication methods.

The third categorisation is about evaluating the effectiveness of policies and safety features implemented by service providers, particularly in safeguarding child users, covering aspects like age verification, parental controls, and handling of potential online child sexual abuse.

The fourth involves analysing user tendencies and statistical patterns, including user behaviour assessments, popularity across age demographics, solicitation risk mapping, and account-related factors such as the use of anonymous accounts and patterns indicating potential risks like fake accounts or identity obfuscation.

The fifth categorisation focuses on evaluating the service’s safety policies. This includes assessing the usage of pre-moderation functionalities, the implementation of content delisting systems, and the employment of image masking techniques to enhance user safety and privacy.

Possible scoring methodologies

The risk categorisation system proposes various scoring methodologies to be applied to a set of parameters, including binary questions, hierarchical criteria, and sampling methods. A combination of these approaches may be integrated into the procedure if needed.

The “binary methodology” involves yes/no questions about the service’s architecture, with each response contributing to a final score that maps onto four risk categories.

The “multi-class scoring with four hierarchical criteria methodology” assesses the effectiveness of policies and features in preventing child sexual abuse, ranking them as “absent,” “basic,” “effective,” and “comprehensive,” with each level indicating a different risk score.
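To make the two approaches concrete, the sketch below shows one way a binary questionnaire and four-level policy rankings could be combined into a single risk category. All question texts, point values, and thresholds are invented for illustration; the presidency document does not specify them.

```python
# Illustrative sketch only: scores, questions, and thresholds are
# hypothetical assumptions, not taken from the presidency document.

# "Binary methodology": yes/no questions about the service's architecture.
ARCHITECTURE_QUESTIONS = [
    "Does the service allow direct messaging between users?",
    "Can adults contact child users without prior approval?",
    "Does the service permit anonymous accounts?",
]

def binary_score(answers: list) -> int:
    """Each 'yes' answer adds one point of risk."""
    return sum(answers)

# "Multi-class scoring": safety policies ranked on four hierarchical levels,
# with weaker levels contributing higher risk scores.
LEVEL_SCORES = {"comprehensive": 0, "effective": 1, "basic": 2, "absent": 3}

def policy_score(levels: list) -> int:
    """Sum the risk contribution of each ranked safety feature."""
    return sum(LEVEL_SCORES[level] for level in levels)

def risk_category(total: int) -> str:
    """Map a combined score onto four risk categories (thresholds invented)."""
    if total <= 2:
        return "low"
    if total <= 5:
        return "medium"
    if total <= 8:
        return "high"
    return "very high"

# Example: two risky architecture traits plus mixed safety-policy rankings.
total = binary_score([True, True, False]) + policy_score(["basic", "effective"])
print(total, risk_category(total))  # 5 medium
```

The point of the sketch is only that both methodologies can feed one aggregate score; how the final legislation would weight or combine them remains open.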

The “sampling methodology” involves compartmentalising data for analysis, with specific data types sampled and procedures defined for their collection and analysis.

This could involve analysing CSAM data or metadata related to user accounts to assess risk factors such as anonymous account usage or frequency of account changes. Data collection, processing, and trend analysis are necessary steps in implementing this methodology, the text says.
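A minimal sketch of what such metadata sampling could look like is given below; the field names, sample size, and data are hypothetical assumptions used only to illustrate the idea of sampling account metadata and analysing a trend such as anonymous-account usage.

```python
# Hypothetical sketch of the "sampling methodology": draw a sample of
# account metadata records and measure a risk-relevant trend.
# All field names and data below are invented for illustration.
import random

def sample_accounts(accounts, k, seed=0):
    """Draw a reproducible random sample of account metadata records."""
    rng = random.Random(seed)
    return rng.sample(accounts, k)

def anonymous_share(sample):
    """Fraction of sampled accounts flagged as anonymous."""
    return sum(a["anonymous"] for a in sample) / len(sample)

# Synthetic population: every fourth account is anonymous (25%).
accounts = [{"anonymous": i % 4 == 0} for i in range(1000)]
sample = sample_accounts(accounts, 200)
share = anonymous_share(sample)
```

The estimated share from the sample would then feed into the risk factors the document mentions, alongside other metadata trends such as the frequency of account changes.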

The next step, according to the document, is to pinpoint the main aspects of the content and extract key principles.

[Edited by Eliza Griktsi/Zoran Radosavljevic]