Nude deepfakes flood the internet amid legislative vacuum

Nude deepfakes, including those of minors, are becoming increasingly common online as the tools to create them become more accessible – yet the law still lags behind in regulating such material.

Deepfakes – synthesised visual content designed to swap or alter the identities of the people depicted – can be created for many purposes, from entertainment to disinformation.

In September, a hoax video circulated depicting Florida Governor Ron DeSantis announcing that he was dropping out of the 2024 presidential race; in October, Hollywood actor Tom Hanks told his Instagram followers that there was a “video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”

However, for victims whose identities have been used in sexual content without consent, the experience can be traumatising – and a life-changing event.

Back in May, Muharrem Ince, one of the main presidential candidates in Turkey, had to withdraw his candidacy over a deepfake pornographic video, which was created using footage from an Israeli pornography site.

The effects of the abuse of this technology are even more acute when minors are involved.

In September, more than 20 teenage girls in Spain received AI-generated naked images of themselves, produced from fully clothed pictures taken from their Instagram accounts.

According to a research report published in October by the UK’s Internet Watch Foundation (IWF), artificial intelligence is increasingly being used to create deepfake child sexual abuse material (CSAM).

“They’re kind of everywhere” and are “relatively easy for people to manufacture”, Matthew Green, associate professor at the Johns Hopkins Whiting School of Engineering’s Department of Computer Science, told Euractiv.

“Everywhere” can include AI websites not specifically made for this type of content, Green said.

Susie Hargreaves OBE, Chief Executive of the IWF, also said that “earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers”.

“We have now passed that point”, she said.

Is this illegal?

While circulating pornographic content involving minors is illegal, introducing the features of a minor into a pornographic image made by consenting adults is a legal grey area that puts the flexibility of national criminal codes to the test.

In the Dutch Criminal Code, there is “a terminology with which you can cover both real, as well as non-real child pornography”, Evert Stamhuis, Professor of Law and Innovation at Erasmus School of Law in Rotterdam, told Euractiv. It is a “broad and inclusive crime description”, he said.

Stamhuis said Dutch courts always try to interpret the terminology in a way that covers new phenomena, such as deepfakes of child sexual abuse material, “until it breaks. And there is always a point when it breaks.”

However, in his experience, this remains rare. Even though a law might be outdated, “the same type of harm that the legislators wanted to tackle with the traditional approach also occurs in the new circumstances”.

In Spain, some of the AI-generated photos were made by the girls’ classmates. According to Stamhuis, however, whether a juvenile or an adult creates such material makes no difference to the crime being committed – but it does make a difference to the right to prosecute.

Yet, the Dutch Criminal Code might be more the exception than the rule in its capacity to address this issue. Manuel Cancio, a criminal law professor at the Autonomous University of Madrid, told Euronews in September that the question is unclear in Spain and many other European legal frameworks.

“Since it is generated by deepfake, the actual privacy of the person in question is not affected. The effect it has (on the victim) can be very similar to a real nude picture, but the law is one step behind,” Cancio said.

Deborah Denis, chief executive of The Lucy Faithfull Foundation, a UK-wide charity which aims to prevent child sexual abuse, said that “some people might try to justify what they’re doing by telling themselves that AI-generated sexual images of children are less harmful, but this is not true”.

“Sharing AI-generated Child Sexual Abuse Material is a criminal offence in most member states,” EU law enforcement agency Europol told Euractiv, adding that they had been informed “about the case by the Spanish authorities, but we had not been asked to provide support”.

At the EU level, policymakers are currently discussing the AI Act, which might include transparency obligations for systems generating deepfakes, such as watermarks clearly indicating that an image has been manipulated.
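The AI Act negotiations do not prescribe a specific labelling technique, but a minimal sketch of what a metadata-based disclosure could look like in practice is shown below, assuming Python with the Pillow imaging library and a PNG file – the tag name and label wording are purely illustrative:

```python
# Minimal sketch of a metadata-based AI disclosure, assuming Pillow and PNG.
# The tag name and label text are hypothetical, not taken from the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a plain-text provenance disclosure attached."""
    meta = PngInfo()
    meta.add_text("ai_disclosure", "This image was generated or manipulated by AI.")
    Image.open(src_path).save(dst_path, pnginfo=meta)

def read_disclosure(path: str) -> str | None:
    """Return the disclosure tag if the image carries one, else None."""
    return Image.open(path).text.get("ai_disclosure")
```

A tag like this is trivially removed by a re-save or a screenshot, which is one reason robust, hard-to-remove watermarking embedded in the image content itself is generally considered necessary for such obligations to bite.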

Enforcement problem

However, law enforcement agencies face an uphill battle in detecting suspect content among the billions of images and videos shared online every day.

“The dissemination of nude images of minors created with AI has been a concern for law enforcement for a while, and one bound to become increasingly more difficult to address,” Europol said.

The law enforcement agency added that AI-generated material must be detected using AI classifier tools – yet, at the moment, they are not authorised to use such tools for this specific purpose.

However, Professor Green noted that technologies aimed at detecting child sexual abuse material are only about 80% accurate, and that the success rate is expected to decline with the rise of deepfakes.
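A back-of-the-envelope calculation shows why that accuracy figure matters at the scale Europol describes. All numbers below are illustrative assumptions, not figures from Europol, the IWF or Green:

```python
# Back-of-the-envelope illustration (all figures are assumptions, not data):
# even a classifier that is right 80% of the time drowns investigators in
# false alarms when the material it hunts for is rare among billions of posts.
images_scanned = 1_000_000_000   # hypothetical daily volume
prevalence = 1e-5                # hypothetical share of images that are abusive
sensitivity = 0.80               # assumed true-positive rate ("80% accurate")
specificity = 0.80               # assumed true-negative rate

actual_positives = images_scanned * prevalence
true_positives = actual_positives * sensitivity
false_positives = (images_scanned - actual_positives) * (1 - specificity)

print(f"flagged correctly:   {true_positives:,.0f}")    # ~8,000
print(f"flagged incorrectly: {false_positives:,.0f}")   # ~200 million
# Under these assumptions, roughly 25,000 images are wrongly flagged
# for every genuine hit.
```

This base-rate problem is why even a small drop in accuracy, as deepfakes become harder to distinguish from real imagery, translates into a disproportionate rise in wasted investigative effort.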

According to Stamhuis, “the strength for software development and production is within the big technology firms,” which also own the major social media platforms, and so they could indirectly benefit from this footage going viral.

[Edited by Luca Bertuzzi/Nathalie Weatherald]
