People have mailed me the images and sent them through Twitter, threatening to dox me. I reported them for harassment and having child sexual abuse material; nothing happened.
Experiences of child sexual abuse material survivors: How technology companies' inaction leads to fear, stalking, and harassment
The recording of their abuse creates ongoing layers of trauma for survivors of child sexual abuse material, including knowing that the abuse material has been or could be shared and viewed by offenders around the world. This report, based on responses from 281 survivors, focuses on survivors’ experiences with the distribution of recordings of their abuse and policy solutions to mitigate its harms.
The report outlines what online service providers did – and did not do – when survivors asked them to remove child sexual abuse material, and the reasons why other survivors have not made such requests.
Underscoring the importance of ensuring online service providers stop the distribution of child sexual abuse material, this report shares survivors’ worries about being recognized from this material. It also recounts the online and offline physical, sexual, and verbal abuse endured by survivors who have been recognized, and presents solutions to these online harms, grounded in what survivors shared.
This report builds on our International Survivors’ Survey, published in 2017.
What we learned
- The majority of survivors (59%¹) had not asked online service providers to remove child sexual abuse material. Their reasons included not knowing how to report or find the imagery, or not knowing that reporting was an option. Others didn’t want to view the imagery or didn’t expect service providers to cooperate.
- Many survivors described negative experiences with online service providers when seeking removal. Service providers were slow to remove the child sexual abuse material, ignored survivors’ requests, or refused to act on them.
- Three-quarters² of survivors worried someone would recognize them from the child sexual abuse material. Most common were fears of being further victimized by the original offender or new offenders.
- Being recognized from child sexual abuse material was almost always a catalyst for new harms, both online and offline³, such as doxing, stalking, harassment, and further abuse, both sexual and non-sexual.
- Anonymized, encrypted, and decentralized services amplify harms to survivors. Survivors highlighted technical aspects of online services that make it difficult to request the removal of child sexual abuse material, such as Tor-based websites, decentralized peer-to-peer networks, and end-to-end encrypted private messaging services.
I have definitely been harassed due to my sexual abuse and exploitation history, being sent threatening and taunting messages about raping and assaulting me, how they think I deserve it and wanted it, how it's good or right that I was raped and abused and exploited, describing in detail what they would do to me, and sending me d*ck pics and telling me about how my abuse gets them off.
Policy recommendations for governments and online service providers
Our recommendations to reduce the continued victimization of survivors and prevent them from having to look for and report child sexual abuse material fall under three categories: proactive measures, reactive measures, and survivor-centric moderation practices.
While we encourage online service providers to voluntarily adopt these recommendations, given the industry’s decades of failure to protect children and survivors, we believe change will depend on governments enforcing regulatory frameworks.
Proactive measures to prevent the distribution of child sexual abuse material and further harm to survivors include:
- Preventing users from uploading known child sexual abuse material by using widely available image and hash matching technologies. These technologies should be used on all platform features that allow users to upload and share content, including private messaging features;
- Flagging for review the upload of suspected child sexual abuse material that is not known to authorities by using a combination of privacy-compliant AI classifiers and human moderation;
- Ensuring sufficient human moderation resources to review and respond to the amount of content on the service, and to detect and block child sexual abuse material that isn’t already known to authorities;
- Blocking, or reviewing with a high degree of scrutiny, the upload of content from users who mask their identity and location, such as accessing an online service using Tor or virtual private networks (“VPNs”);
- Establishing “know your client” practices that are adapted based on risk assessments. For example, online services where users can upload adult pornography should require more robust user-verification practices than online services where users submit written product reviews;
- Verifying the age and actual, non-coerced consent of all individuals shown in user-uploaded adult pornography, before making the material available on the platform;
- Designing systems to ban users who upload child sexual abuse material and prevent them from creating new accounts;
- Disrupting offenders’ activities and community building by moderating online services based on behavioural patterns and the presence of other indicators alongside content (e.g., sexual commentary, terminology known to be associated with child sexual abuse material);
- Sharing data with other industry partners about users who have attempted to share child sexual abuse material or have been flagged under the preceding recommendation (similar to information sharing related to users suspected of fraud, spam, money laundering, or other computer-based crimes);
- Preventing users from sending sexually violent or harassing comments and private messages by blocking known key words and phrases associated with child sexual abuse.
Preventative systems for stopping abuse or victimization are only one of many measures needed to safeguard survivors and children. When known child sexual abuse material goes undetected by proactive measures, or when new child sexual abuse material is uploaded for the first time, a second line of defense for survivors is needed. These measures also need to be responsive to other types of content, such as harmful or abusive material and the distribution of private information about victims and survivors. Reactive measures to address such photos, videos, and text include:
- Providing barrier-free options for anyone — registered users and visitors — to report content that may revictimize survivors. Importantly, survivors must not be required to identify themselves;
- Ensuring options for reporting content that may revictimize survivors are clearly and prominently displayed and promoted, so users and visitors can easily access available remedies;
- Prioritizing the urgent review and expeditious removal of reported content that may revictimize survivors. Doing so will mitigate the distribution of child sexual abuse material and protect survivors’ privacy and safety.
Many ongoing harms to survivors are exacerbated by online service providers failing to use moderation practices that center on survivors of child sexual abuse material. Components of such an approach include:
- Erring on the side of caution and safety. This includes:
- When evaluating possible child sexual abuse material based on sexual maturation characteristics, if it is unclear whether the person depicted is an adult or a child, assume they are a child and remove the content. Online service providers may consider allowing uploaders to appeal the decision and restoring content if they prove everyone in the material was an adult at the time, non-coerced consent was given, and that consent was and remains valid;
- When a user flags potential child sexual abuse material, automatically hide the content, so it cannot be viewed or distributed while awaiting review;
- When someone reports material they say depicts child sexual abuse of them, take their word for it. Do not require survivors to prove their age or identity.
- Acting on content that may revictimize survivors of child sexual abuse when reported, such as:
- Photos and videos of survivors as children that do not meet a criminal threshold. Though legal, this material can nonetheless be harmful. For example, it may be part of a collection of sexual abuse content (e.g., a photo of a clothed survivor taken from the start of an abuse video) or may be used to signal the availability of child sexual abuse material (e.g., photos of the survivor in a bathing suit);
- Photos and videos of now-adult child sexual abuse material survivors, to help protect them from recognition and associated harms;
- Personal information about survivors, such as names, addresses, phone numbers, social media accounts, and family members, to minimize harms like doxing, stalking, and harassment.
- Providing individuals who submit abuse reports with details about actions being taken, a report reference number, and contact information for further inquiries.
- ¹ 100 of the 169 survivors who addressed whether they had asked online service providers to remove child sexual abuse material.
- ² 100 of the 135 survivors who stated whether they worry about being recognized from the abuse imagery.
- ³ Of the 82 survivors who were recognized from the abuse imagery, 78 experienced further harassment or abuse.