The clock is ticking to save children.
Last year, internet companies reported 85 million images and videos of child sex abuse online worldwide. A large part of this material involves very young children and very serious crimes, including the rape of babies.
Critics rarely mention that detection of known abuse material has already been going on for ten years or more. If you use Gmail, Facebook, Dropbox, Twitter or Discord, or other household brands, your pictures are already being checked for known child sexual abuse by Microsoft’s PhotoDNA – which has an error rate of one in 50 billion.
Technology detecting new child sexual abuse is extremely accurate: it tells the difference between babies in the bathtub and babies being raped, in particular when combined with human review.
Roblox – a game many children play – detects grooming in chat.
All this stops in two years’ time, when the temporary legislation expires that permits internet companies to detect child sexual abuse on their services.
When that happens, police will be blindfolded. In some Member States, up to 80 percent of tips on child sexual abuse that police rely on originate from internet companies. A report from an internet company triggered the Bergisch Gladbach investigations in Germany, which identified over 400 suspects and saved 65 children.
It will also become extremely difficult to take down images and videos of rapes of children – which continue to traumatise the victims long after the physical abuse has ended.
Taking down these images and allowing police to continue to rescue children – these are the goals of my legislation.
I listened to the criticism that today’s voluntary efforts are too broad. That’s why I propose to move from broad, voluntary reporting to a targeted, mandatory procedure.
Unlike now, searches in future will take place according to indicators defined by the new Centre for fighting Child Sex Abuse, based on EU law and confirmed by courts in EU Member States.
Prevention through safety by design is key. Detection can only happen if prevention doesn’t work and the risk is significant enough.
The decision on detection does not lie with the service provider, nor with a single Member State, but will be the result of a dialogue between industry, national authorities, the European Data Protection Board, and the EU Centre.
Any detection will be targeted to the risk, limited in time, and strictly limited as to what is being detected and how.
We don’t know of any cases where false positives led to law enforcement action – despite more than ten years of detection.
Even so, the EU Centre will check for false positives before sending material to law enforcement. No innocent pictures will end up in police databases.
We don’t expect more reports, but rather reports of better quality, more actionable for law enforcement.
In short: my proposal fights a clear and present danger, and allows detection of child sexual abuse to continue with many more safeguards than we have today.
We now need to decide what kind of digital world we want our children to live in.
To protect the rights of children and all Internet users, we need an open and honest debate, based on facts. Fearmongering about false positives or surveillance does not contribute to this discussion.
Critics should realise: if they get what they want, all detection efforts in the European Union will end. If that’s what they want, they should be honest and openly admit it.
- Publication date
- 5 July 2022
- Directorate-General for Communication