
Claims by academic group calling for EU to abandon CSAM-blocking policies don’t stand up to real-world scrutiny


Written by , Director of IT with the Canadian Centre for Child Protection

Last week, an international group of academics called for the EU to abandon its pursuit of regulatory measures that would require tech companies to make efforts to detect the distribution of child sexual abuse material (CSAM) and attempts by offenders to sexually groom children on their platforms.

The group’s resistance is fundamentally grounded in the view that if a system for preventing the sexual exploitation of children on the internet is not perfect, we should not bother implementing it at all. This is a common refrain whenever safeguards are proposed for digital spaces. Some of the group’s claims are also erroneous, misleading, or both.

Our team at the Canadian Centre for Child Protection has more than 10 years of real-world experience using many of the technologies criticized by the group. Our expertise is not theoretical but applied: we have caused millions of CSAM images and videos to be removed from the internet using these proven tools. With this in mind, let’s unpack some of the claims in their open letter:

“Moreover, it is also possible to create a legitimate picture that will be falsely detected as illegal material as it has the same hash as a picture that is in the database (false positive). This can be achieved even without knowing the hash database. Such an attack could be used to frame innocent users and to flood Law Enforcement Agencies with false positives – diverting resources away from real investigations into child sexual abuse.” 

It is possible to frame innocent users and flood fire departments with false positives by pulling fire alarms. Instead of eliminating fire departments, we discourage this type of anti-social behaviour with legal penalties. Moreover, crafting a false-positive image without access to the master database is far more time-consuming and resource-intensive than pulling a fire alarm.
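To make the hash-matching discussion concrete, here is a minimal sketch of a perceptual "average hash" in Python using the Pillow library. It is illustrative only: production systems such as PhotoDNA use far more robust perceptual features and carefully tuned match thresholds, and the function names here are our own.

```python
# Minimal average-hash (aHash) sketch using Pillow. Illustrative only;
# production systems such as PhotoDNA use far more robust perceptual
# features and carefully tuned match thresholds.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

# Two visually similar images yield hashes with a small Hamming
# distance; unrelated images almost always differ in many bits. A match
# is declared only when the distance falls below a tuned threshold.
```

Deliberately constructing an unrelated image whose hash lands within that threshold, without access to the database being matched against, is the "attack" the letter describes and the effort the paragraph above refers to.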

“As scientists, we do not expect that it will be feasible in the next 10-20 years to develop a scalable solution that can run on users’ devices without leaking illegal information and that can detect known content (or content derived from or related to known content) in a reliable way, that is, with an acceptable number of false positives and negatives.”

This is the core issue: "an acceptable number" of false positives and negatives is never specified. If the "acceptable number" is zero, this holds CSAM detection to a higher standard than every other abuse-mitigation technology. Consider radar guns, traffic enforcement cameras, airport security scanners, and drug-sniffing dogs: every detection methodology carries a risk of false positives, yet we do not abandon those technologies. Instead, we build protocols and policies around them so that false positives do not carry consequences for innocent individuals. This same argument was made against PhotoDNA scanning of images; that scanning was implemented years ago, and the false-positive apocalypse has never materialized.
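As a sketch of what "building protocols around the technology" can look like, the hypothetical Python snippet below routes a candidate hash match to trained human review instead of triggering any automatic consequence. The threshold value and function names are our own illustrative assumptions, not any vendor's actual pipeline.

```python
# Hypothetical triage step: a hash match is treated as a signal for
# human review, never as an automatic sanction. Threshold and names
# are illustrative only.
MATCH_THRESHOLD = 8  # max Hamming distance treated as a candidate match

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def handle_upload(image_hash: int, known_hashes: set[int]) -> str:
    distances = (hamming_distance(image_hash, h) for h in known_hashes)
    if min(distances, default=64) <= MATCH_THRESHOLD:
        # Candidate match: route to trained human review before any
        # report is made, so a false positive never carries
        # consequences for an innocent user.
        return "queue_for_human_review"
    return "deliver_normally"

print(handle_upload(0b1010, {0b1011}))  # -> queue_for_human_review
```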

“At the scale at which private communications are exchanged online, even scanning the messages exchanged in the EU on just one app provider would mean generating millions of errors every day.”

The letter itself states that “no open and objective evaluation has taken place that demonstrates their effectiveness,” so what is the basis for this estimate?

“False positives are also an inevitability when it comes to the use of detection technologies -- even for known CSAM material.”

It is true that no detection system is flawless; however, there are countless examples of detection systems in which safety and security are thoughtfully balanced against the false-positive rate.

“The only way to reduce this to an acceptable margin of error would be to only scan in narrow and genuinely targeted circumstances where there is prior suspicion.”

Again, "acceptable" is left wide open and we are supposed to read it as "zero."

“….the large number of people who will be needed to review millions of texts and images.”

Another number without a citation.

“Second, a virus can be recognized based on a small unique substring, which is not the case for a picture or video: it would be very easy to modify or remove a unique substring with small changes that do not change the appearance; doing this for a virus would make the code inoperable.”

Not true, according to the antivirus industry itself; see, for example, https://www.kaspersky.com/resource-center/definitions/what-is-a-polymorphic-virus, which cites "[r]esearch published last year showing that a staggering 97 percent of viruses analyzed had polymorphic properties." Virus scanners use imprecise matching and heuristic scanning, just like any other detection technology.
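The toy Python example below shows why exact matching was never the standard in the first place: a single changed byte completely changes a cryptographic hash, while a simple similarity measure barely moves. The Jaccard measure here is a stand-in of our own; real antivirus engines and image-matching systems use far more sophisticated imprecise matching.

```python
# A single flipped byte changes a cryptographic hash completely, which
# is why scanners (antivirus and image matching alike) rely on
# imprecise, similarity-based matching rather than exact hashes.
import hashlib

original = bytes(range(256)) * 8   # 2048 bytes of sample "code"
modified = bytearray(original)
modified[50] ^= 0xFF               # flip a single byte

print(hashlib.sha256(original).hexdigest()[:16])         # differs entirely
print(hashlib.sha256(bytes(modified)).hexdigest()[:16])  # from this one

def ngrams(data: bytes, n: int = 4) -> set[bytes]:
    """All overlapping n-byte substrings of the input."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

a, b = ngrams(original), ngrams(bytes(modified))
similarity = len(a & b) / len(a | b)  # Jaccard similarity, roughly 0.98
print(f"byte-level similarity: {similarity:.3f}")
```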

“Such tools would reportedly work by scanning content on the user’s device before it has been encrypted or after it has been decrypted, then reporting whenever illicit material is found. One may equate this to adding video cameras in our homes to listen to every conversation and send reports when we talk about illicit topics.”

In a CSS regime, only media in transit (i.e., moving from the device to an external server) is “checked” for CSAM. Note that “checked” is the terminology WhatsApp uses for “scanning” when describing how it performs CSS for malware; see https://faq.whatsapp.com/667552568038157/.

Media at rest on the user’s device is not subject to “checks.” Rather than video cameras in our homes, CSS is more like a smoke alarm: it performs a vital safety function without interrupting our normal activities.
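For readers unfamiliar with the distinction, the hypothetical Python sketch below shows where such a check would sit: in the send path only, never over media at rest. Every name in it is an illustrative assumption on our part, not a description of WhatsApp’s, Apple’s, or anyone else’s implementation.

```python
# Hypothetical placement of a client-side check: it runs only when
# media is about to leave the device, never over media at rest.
# All names are illustrative; this describes no vendor's actual system.
import hashlib

KNOWN_HASHES = {"0000demo0000"}  # placeholder for a vetted hash list

def media_fingerprint(media: bytes) -> str:
    # Real systems use perceptual hashes (e.g. PhotoDNA, PDQ); an exact
    # SHA-256 stands in here only to keep the sketch runnable.
    return hashlib.sha256(media).hexdigest()

def send_media(media: bytes) -> bytes:
    """Called only for media in transit, i.e. about to be uploaded."""
    if media_fingerprint(media) in KNOWN_HASHES:
        print("candidate match: queued for provider review")
    return encrypt_and_transmit(media)

def encrypt_and_transmit(media: bytes) -> bytes:
    # Stub: the encryption step itself is untouched by the check above.
    return media[::-1]

# Media sitting in the local gallery never passes through send_media(),
# so it is never checked.
```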

“The only deployment of CSS in the free world was by Apple in 2021, which they claimed was state-of-the-art technology.”

Technology companies have been using CSS techniques in different contexts for years. The messaging app Signal used a technique involving truncated hashes of the phone numbers in your local address book to perform contact discovery; see https://signal.org/blog/private-contact-discovery/. Apple is currently using CSS to detect sensitive photos; see https://support.apple.com/en-us/HT212850.
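The truncated-hash idea is simple enough to sketch in a few lines of Python. The hash function and truncation length below are illustrative assumptions on our part; Signal’s linked blog post describes the limits of this approach and its move beyond it.

```python
# Simplified sketch of truncated-hash contact discovery: the client
# uploads only short hash prefixes of address-book numbers, which the
# server compares against prefixes for registered users. Parameters
# here are illustrative, not Signal's actual values.
import hashlib

def truncated_hash(phone_number: str, length: int = 10) -> str:
    digest = hashlib.sha256(phone_number.encode("utf-8")).hexdigest()
    return digest[:length]

address_book = ["+15551234567", "+15557654321"]
upload = [truncated_hash(n) for n in address_book]
print(upload)  # only these short prefixes ever leave the device
```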

“This effort was withdrawn after less than two weeks due to privacy concerns and the fact that the system had already been hijacked and manipulated.”

Unless this has a citation, it is complete speculation about Apple's motives.

“When deployed on a person’s device, CSS acts like spyware, allowing adversaries to gain easy access to that device.”

Completely false and misleading. There is no technical connection between client-side scanning and allowing someone to gain access to a device.

“Any law which would mandate CSS, or any other technology designed to access, analyse or share the content of communications will, without a doubt, undermine encryption”

Also untrue; there is no technical link between CSS and encryption. As the letter itself notes, CSS operates on content before it is encrypted or after it is decrypted; it does not weaken or modify the encryption protocol itself.

“Even if such a CSS system could be conceived, there is an extremely high risk that it will be abused. We expect that there will be substantial pressure on policymakers to extend the scope, first to detect terrorist recruitment, then other criminal activity, then dissident speech.”

On what basis do they assess the "extremely high" risk of abuse and the willingness of governments to compromise CSS? This same argument was leveled against server-side PhotoDNA scanning 15 years ago; where are the allegations of such abuse?

“If such a mechanism would be implemented, it would need to be in part through security by obscurity as otherwise it would be easy for users to bypass the detection mechanisms, for example by emptying the database of hash values or bypassing some verifications.”

This is also untrue; many security mechanisms (DVD and Blu-ray players, HDMI content protection, mobile SIM locking, firmware update restrictions, mobile device security, computer hard-drive encryption) rely on on-device enforcement rather than obscurity. Jailbreaking an up-to-date iPhone, for instance, is not at all easy. These measures aren't perfect, but companies continue to invest in them because they work.
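As a concrete illustration of on-device security that does not depend on obscurity, the Python sketch below (using the widely available cryptography library) shows a hash database shipped with a publisher signature, so an emptied or edited database is simply rejected by the client. This is our own illustrative assumption about how such a check could work, not a description of any proposed system.

```python
# On-device integrity without obscurity: the hash database carries a
# publisher signature, and the client refuses to run with a database
# that has been emptied or altered. Illustrative sketch only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()   # held only by the publisher
verify_key = publisher_key.public_key()        # baked into the client

database = b"\n".join([b"hash-entry-1", b"hash-entry-2"])
signature = publisher_key.sign(database)       # produced at publish time

def load_database(blob: bytes, sig: bytes) -> bytes:
    """Client side: reject a tampered or emptied database."""
    try:
        verify_key.verify(sig, blob)
    except InvalidSignature:
        raise RuntimeError("hash database failed integrity check")
    return blob

load_database(database, signature)      # accepted
# load_database(b"", signature)         # would raise: tampering detected
```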

“We have serious reservations whether the technologies imposed by the regulation would be effective: perpetrators would be aware of such technologies and would move to new techniques, services and platforms to exchange CSAM information while evading detection.”

This is a fatalistic argument commonly used by groups opposed to the imposition of rules or policies in a given area. Would these same academics suggest that society not restrict ease of access to firearms on the basis that ill-intentioned individuals may find new avenues to acquire them? Mechanisms for suppressing illicit behaviours are never perfect, nor do they have a defined end-point. They are ever-evolving and designed to inject friction and barriers into systems that can be exploited to commit harm.

Speaking from actual experience, we successfully detect and remove tens of thousands of these images daily using these perceptual hashing technologies. We don’t stop this effort because offenders try to evade detection; we adapt and innovate as the threat changes, just as technology companies should be expected to.

“It is user complaints rather than AI that in practice lead to the detection of new abuse material.”

According to Google, their AI-based Content Safety API processes millions of images each year: https://protectingchildren.google/tools-for-partners/. Presumably, they and their partners are using this tool because it does in fact detect previously unseen images of abuse material.
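As a generic illustration of how classifier output gets used, and emphatically not a description of Google’s API, the sketch below ranks new, unmatched images by a model’s priority score so the highest-priority items reach human reviewers first.

```python
# Generic classifier-assisted triage: unmatched images are ranked by a
# model's score so reviewers see likely new abuse material first.
# Purely illustrative; not Google's Content Safety API or any product.
from dataclasses import dataclass

@dataclass
class NewImage:
    image_id: str
    priority_score: float  # produced by a trained classifier

def review_queue(images: list[NewImage], capacity: int) -> list[NewImage]:
    """Return the highest-priority images for human review."""
    ranked = sorted(images, key=lambda i: i.priority_score, reverse=True)
    return ranked[:capacity]

queue = review_queue(
    [NewImage("img-001", 0.12), NewImage("img-002", 0.97), NewImage("img-003", 0.55)],
    capacity=2,
)
print([img.image_id for img in queue])  # -> ['img-002', 'img-003']
```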

-30-

About the Canadian Centre for Child Protection: The Canadian Centre for Child Protection (C3P) is a national charity dedicated to the personal safety of all children. The organization’s goal is to reduce the sexual abuse and exploitation of children through programs, services, and resources for Canadian families, educators, child serving organizations, law enforcement, and other parties. C3P also operates Cybertip.ca, Canada’s tipline to report child sexual abuse and exploitation on the internet, and Project Arachnid, a web platform designed to detect known images of child sexual abuse material (CSAM) on the clear and dark web and issue removal notices to industry.
