  • News blog
  • 10 July 2023
  • Directorate-General for Communication

EUvsChildSexualAbuse – hard truths or hypothetical scare-tactics

For my EU proposal to fight child sexual abuse, the last two weeks have been a tale of two articles.

One by Bob Shilling, co-founder of the Brave Movement, survivor of child abuse and former INTERPOL head of crimes against children.*

The other by Professor Carmela Troncoso, SPRING Lab, and Professor Bart Preneel, COSIC, on behalf of technology academics.**

The articles are starkly different in their approach.

Bob’s article speaks of the human consequences of child abuse. He talks about the dread he endured as a child. About the failure of adults to protect him. And about how he then converted his suffering into a life-long resolve to protect other children.

He talks about the fact that behind every image is a real child. About how many of these images are of toddlers and babies. He explodes some of the myths and misinformation around this subject.

Particularly the myth (that the expert article repeats) that this is ‘only’ about teenagers sending each other images. That this is about ‘experimentation’.

He is searingly accurate on this: Babies are not posting images. They are being abused, horrifically.

And his organisation, the Brave Movement, knows that my proposal is necessary. I want to thank them again for their work.

On the other hand, we have the article that only speaks about technical hypotheticals. This article is from a letter by technology sector academics.

There is a token reference to the need to fight child abuse. It states that “it is the responsibility of government authorities, with the support of companies and communities, to undertake effective interventions which prevent this crime and react to it quickly when it does happen”.

Unfortunately, the letter and article then go on to expound multiple hypotheticals to undermine the effective interventions I propose. They outline a litany of ‘could, might, may’ speculations with questionable reasoning.

Detection has been happening for ten years

First of all, the article attempts to portray the detection of child sexual abuse material as something new. This is a false claim, and one that has been repeated by many other technology sector voices.

It’s a misunderstanding that the Commission proposal introduces detection of child sexual abuse online – detection has been happening for more than ten years. Last year there were 1.5 million reports of child sexual abuse in the European Union, containing five million images and videos of sexual violence against children. 70 percent of these EU reports originated in online chat, email or messages.

At the moment digital services voluntarily detect and remove this content – both known content and new sexual abuse material, as well as extortion or grooming. This has not led to any game-changing erosion of privacy for internet users.

Secondly, my proposal is what is called ‘technology neutral’. It doesn’t impose any specific technology; to do so would instantly date the proposal and render it obsolete.

The letter explicitly asks for my proposal not to be taken forward. If that advice is followed, all current detection would have to stop. Once the temporary legislation that allows companies to detect material lapses next year, detection will become legally impossible. Remember, that is detection that has been practised effectively for more than ten years and has been instrumental in rescuing children from abuse.

Denying the possibility of using available technologies is a continent-wide ‘go’ signal to child abusers. They will be able to operate with virtual impunity.

Practical method to save children – in real time

This detection is often the only way to start investigations (it triggers up to 80% of investigations in some Member States) and rescue children from abuse. The abuse takes place behind closed doors but is shared online in completely ordinary messenger services, where internet companies can find it. All too often, children are scared to report the abuse. There is a responsibility on the companies.

Rather than look at the hypothetical, we need to face the reality. And that reality is the following: one in five children suffers from some form of sexual violence. The internet has greatly worsened the spread of child sexual abuse, allowing perpetrators to meet online and instantaneously share videos and pictures of severe sexual violence against children – often very young children, including babies. And abuse images are shared thousands of times, re-traumatising the victims.

Safety locks in place

My proposal focuses on prevention. Unlike today, detection will only happen as a matter of last resort, if prevention fails. Providers must prevent child sexual abuse on their services. Only when they can’t or won’t could they face a detection order, where there is a significant risk of abuse.

Now let’s look at the checks and balances

Service providers will submit risk assessments and, if needed, propose mitigating measures. Coordinating Authorities in the Member States consider these and can ask providers to consider additional mitigating measures. Only if these are inadequate can the Coordinating Authority start the process of requesting a detection order. The Coordinating Authority will first inform the service provider and the EU Centre of its intentions and ask for comments. The service provider must then draft an implementation plan with specific measures for all stakeholders involved in the process (Coordinating Authority, data protection authority, EU Centre) to consider. These measures must comply with the General Data Protection Regulation (GDPR).

The coordinating authority then considers whether the risk outweighs the negative consequences of a detection for all users, and on that basis can ask a court (or equivalent body depending on the legal system of the Member State) to issue a detection order as a measure of last resort. It is the independent authority or court that decides to issue the order.

Both the Coordinating Authority in its draft and the independent authority or court must limit the order to what is strictly necessary.

A new EU Centre to Prevent and Fight Child Sexual Abuse will assist this process.

Detecting child abuse and only that

It is legally and technically impossible to use this legislation for anything other than the prevention and detection of child sexual abuse. Companies will only be allowed to detect child sexual abuse as defined by EU law – the 2011 Directive on combating child sexual abuse – and nothing else. The technology used can only be based on indicators of child sexual abuse and grooming classifiers provided by the EU Centre, and nothing else.
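
To illustrate what “based only on indicators provided by the EU Centre, and nothing else” means in practice, here is a minimal sketch in Python. It is not the EU Centre’s actual specification: the indicator list, the use of SHA-256 (real detection systems typically rely on perceptual hashes so that re-encoded copies of an image still match) and the report format are all illustrative assumptions.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical indicator list: hashes of known abuse material distributed by
# a central authority. The service matches against this list and nothing else.
KNOWN_INDICATORS = {
    # entries would be supplied by the central authority, e.g.
    # "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

@dataclass
class Report:
    content_hash: str
    service_id: str

def check_content(data: bytes, service_id: str) -> Optional[Report]:
    """Return a report only if the content matches a known indicator.

    The check is a set-membership test: content that matches no indicator
    produces no output at all, so nothing about it leaves the service.
    """
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_INDICATORS:
        return Report(content_hash=digest, service_id=service_id)
    return None
```

In this sketch a match is the only event that produces any output; under the proposal, reports would additionally pass through the EU Centre’s review before ever reaching law enforcement.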

The new EU Centre will act as a filter to make sure no false positives reach law enforcement.

The law leaves no space for any malicious actor to intervene, and it newly provides for oversight, including on-site controls by the Coordinating Authorities.

Child sexual abuse is the only crime where the content is the crime: the abuse pictures and videos are in themselves a criminal offence.

Meanwhile, the alternative solution suggested by the expert letter – user reporting of abuse – is woefully inadequate. As Bob writes, 40 percent of the 7 million images in INTERPOL’s database are of toddlers and babies being raped. This type of abuse is normally shared only among perpetrators, invisible to other users. Needless to say, a baby can’t report abuse.

We need this regulation.

* https://www.aljazeera.com/opinions/2023/6/23/behind-every-child-sex-abuse-image-there-is-a-real-child

** https://www.euronews.com/2023/07/04/detecting-child-sexual-abuse-material-shouldnt-be-done-at-any-cost
