Exclusion of private messaging features from proposed Online Harms Act leaves a substantial threat to children unaddressed
Purpose
This briefing note sets out the Canadian Centre for Child Protection's (C3P) view that Canada's proposed Online Harms Act (Bill C-63) should be amended so that private messaging services, and certain aspects of private messaging features, are subject to regulation, given that a significant share of the harm experienced by children occurs in these very digital environments.
Key takeaways
- Private messaging features are one of the main attack vectors reported by victims of online child sexual exploitation in Canada.
- However, the current draft of the Online Harms Act covers only operators of social media services, expressly excludes private messaging features of those services, and does not cover services that only provide private messaging.
- Despite criticism abroad about regulating private messaging services/features, public and all-party support for inclusion can be achieved within Canada.
- Recommendation: Scope in private messaging services (and the private messaging features of social media services) and subject their architecture and design, such as discoverability algorithms and abuse reporting functions, to regulation.
Issue
As currently drafted, Bill C-63 excludes private messaging services (e.g. Messenger®, WhatsApp®, Wizz®, Signal®, Telegram®) and private messaging features (e.g. the chat functions within Instagram®, Snapchat® and TikTok®) from its scope:
Exclusion of service
5 (1) For the purposes of this Act, a service is not a social media service if it does not enable a user to communicate content to the public.
[...]
Exclusion of private messaging feature
6 (1) The duties imposed under this Act on the operator of a regulated service do not apply in respect of any private messaging feature of the regulated service.
Given the reality of how children are being sexually exploited online, this raises concerns about whether the current inclusion criteria would limit the effectiveness of a future Online Harms Act in achieving its intended purpose of safeguarding children online. In short, as it relates to the child protection components of the Bill, the current inclusion criteria of Bill C-63 are not targeting a substantial risk currently facing children. There is also concern about the scope of the Bill given that upcoming generations of children increasingly engage online in more fractured private and semi-private spaces (Snapchat groups, Discord channels, gaming-based chats), as opposed to the more traditional public-facing social media forums popularized in the 2000s and 2010s, such as Facebook, Twitter, Tumblr® and MySpace® [1].
The best available information about online child sexual exploitation in Canada (which is at the heart of Bill C-63’s structure) shows private messaging services and features are often the main attack vectors used to commit crimes against or harm children:
- Data from Canadian law enforcement shows that Child Luring, which typically occurs over private messaging, represented the large majority (77%) of online sexual offences against children and youth reported between 2014 and 2020 [2]. The rate of Child Luring increased 69% between 2014 and 2022 [3].
- Data from Cybertip.ca, Canada's tipline for reporting the online sexual abuse and exploitation of children, shows that for the vast majority of victims (both children and young adults) who reach out for support, offenders used private messaging features or services to initiate or facilitate their abuse. Private messaging was also the main attack vector in effectively all of the more than 7,000 online sextortion cases reported to Cybertip.ca over the last 24 months.
Private messaging risks have also been reported in the UK, where the Office for National Statistics (ONS) found [4]:
- Among children who had been contacted online by someone they did not know, the message was most commonly received via private message (74%).
- Around 1 in 10 children (11%) aged 13 to 15 years reported receiving a sexual message in the previous 12 months, while around 1 in 100 reported sending one.
Discussion
Intense debates in other jurisdictions, such as the UK and the EU, over proposed legislation to regulate private messaging features may make the issue appear politically insurmountable, given how concerns about privacy and state surveillance have been framed. However, it is worth noting that most of those debates focused on (1) mandatory proactive hash scanning of private communications for known child sexual abuse material (CSAM), and (2) doing so within end-to-end encrypted environments. The criticism did not extend to the peripheral features, architecture, or design of messaging services that can also reduce harm.
Reality check: Current industry practices with private messaging features
It is important to note that many of the large social media, gaming and private messaging services most Canadians interact with already make use of moderation and/or technology to reduce child sexual exploitation within or surrounding their private messaging features [5]. These tools and the companies using them include, among many other examples listed in Appendix A:
- The use of PhotoDNA by Snapchat, Twitter, Discord and Facebook Messenger on images shared privately in an effort to detect and disrupt the exchange of known CSAM;
- The use of language models by TikTok to detect CSAM using textual analysis;
- The use of PhotoDNA by Google to detect known CSAM in Gmail.
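To make the mechanics concrete, the sketch below shows what matching an image against a list of known hashes looks like in principle. PhotoDNA itself is a proprietary perceptual-hashing technology that tolerates minor alterations to an image; for simplicity, this illustration instead uses an exact cryptographic hash (SHA-256, one of the hash types listed in Appendix A) and a placeholder hash list rather than any real data.

```python
import hashlib
from pathlib import Path

# Hypothetical hash list: in practice, providers obtain vetted hash lists of
# known CSAM from child protection organizations. The value below is a
# placeholder for illustration, not a real entry.
KNOWN_IMAGE_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file's bytes, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_image(path: Path) -> bool:
    """Return True if the file's hash appears in the known-image hash list."""
    return sha256_of_file(path) in KNOWN_IMAGE_HASHES
```

Note that exact hash matching only flags files identical to a listed file; perceptual hashes such as PhotoDNA are designed to survive resizing and minor edits, which is why industry relies on them alongside exact hashes.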
Regulate private messaging features without “encroaching” on private communication
C3P believes proactive hash scanning for known CSAM within private communication environments can be, and routinely is, done in ways that provide users with significant privacy protections while balancing the need to curb technology-facilitated harms and crimes, as many prominent online service providers have demonstrated for years (see Appendix A).
Even without proactively scanning message contents for digital signatures of files containing known CSAM, there are ways that online service providers can ensure private messaging services or features are safer for children without “encroaching” on the contents or substance of private messages.
In many cases, mitigation measures involve making use of transmission data, which encompasses data related to the routing of a communication (for example, telephone number, Internet Protocol address, port numbers, date and time) and information that allows service providers to authenticate the user and their devices (for example, the International Mobile Equipment Identity (IMEI) number or Subscriber Identity Module (SIM) card). Transmission data itself does not reveal the substance, meaning, content or purpose of the private communication.
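As a rough illustration of the distinction, the sketch below models a transmission-data record of the kind described above. The field names are hypothetical and not drawn from any particular provider's schema; the key point is that no field carries the body, subject or attachments of the message itself.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class TransmissionRecord:
    """Routing and authentication metadata of the kind described above.
    Illustrative field names only; no message content is present."""
    sender_account_id: str
    recipient_account_id: str
    sender_ip: str                  # Internet Protocol address
    sender_port: int                # port number used for the connection
    sent_at: datetime               # date and time of transmission
    device_imei: Optional[str]      # device identifier, where available
    sim_identifier: Optional[str]   # SIM card identifier, where available
```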
Consistent with many industry practices, such examples include ensuring private messaging features:
- Have functions that allow users to report abuse from within their private messaging bubble;
  - User reports triggered from within the messaging feature are among the main sources of WhatsApp's reports to the U.S. National Center for Missing & Exploited Children (NCMEC), even though WhatsApp is end-to-end encrypted.
  - Currently, Apple's iMessage does not offer tools or functions for users to report abuse other than spam [6].
- Consider how user search functions and discoverability impact safety.
  - Arguably, if a private communication service allows users to search for individuals by name, region, or any other personal characteristic (as opposed to requiring users to already know each other's direct contact information), it would be more appropriate to classify it as a social media service.
  - In reaction to reports of sextortion among youth, one cybersecurity expert remarked that “Instagram is effectively the world’s largest search directory of teens, for scammers and predators” [7].
  - Popular apps such as Wizz (once dubbed a “Tinder for kids” [8]) allowed strangers to indiscriminately contact minors worldwide for sextortion attacks.
- Evaluate suspicious behaviours using signals/intelligence derived from metadata on the periphery of the content of a private message (a minimal scoring sketch appears after this list), such as:
  - Accounts that have been blocked or reported by many users;
  - Accounts that add high volumes of users to a group while sharing no common network;
  - Accounts that have excessively high ratios of outgoing to incoming messages;
  - Accounts that send high volumes of messages to users not found in their personal contacts.
- Establish robust/safe privacy settings by default;
  - Publicly visible friend lists are often used to gain leverage for blackmail, including by identifying family members of the child [9].
  - Limit private messaging with children, by default, to users found within their existing personal contacts.
- Consider the impact of providing real-time geolocation of users to other users;
  - These features have been roundly criticized by privacy and safety advocates over the years, yet continue to be deployed by companies [10].
- Provide user verification and/or age verification to age-gate certain services when appropriate;
- Provide users with the ability to geo-gate their experience to limit the scope of the community a child interacts with;
  - Reports suggest most sextortion offenders operate from a group of “high risk” countries [11]. Users ought to have the ability to limit their platform experience to a geographic region that makes them feel safe.
- Have strong, easy-to-use parental controls and policies in place on services that allow children;
- Limit the ability of users making use of anonymization tools such as Tor or VPNs to engage with other users or to benefit from certain functions or features [12][13];
- Limit the ability of virtual phone numbers to register accounts (a minimal screening sketch appears after this list);
  - WhatsApp, for example, blocks the use of virtual phone numbers, landlines and toll-free numbers to register accounts, forcing the use of real SIM-based numbers [14]. According to WhatsApp, this measure enhances the security of its network, reducing the potential for abuse and exploitation within its end-to-end encrypted private messaging service [15][16].
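To illustrate the “suspicious behaviour” signals listed above, here is a minimal, hypothetical scoring sketch. The field names, thresholds and weights are invented for illustration and are not drawn from any provider's actual systems; the point is that it consumes only metadata-style counters and never touches message content.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Behavioural counters derivable from metadata alone (hypothetical names)."""
    reports_received: int            # times other users reported this account
    blocks_received: int             # times other users blocked this account
    group_adds: int                  # users this account added to groups
    shared_contacts_with_adds: int   # network overlap with the users it added
    messages_sent: int
    messages_received: int
    messages_to_non_contacts: int    # messages sent to users outside contacts

def risk_score(s: AccountSignals) -> int:
    """Toy heuristic combining the metadata signals listed above; a real
    system would tune weights and route high scores to human review."""
    score = 0
    if s.reports_received + s.blocks_received >= 5:
        score += 2   # blocked or reported by many users
    if s.group_adds >= 50 and s.shared_contacts_with_adds == 0:
        score += 2   # mass group adds with no common network
    if s.messages_sent / max(s.messages_received, 1) > 20:
        score += 1   # excessively one-sided outgoing traffic
    if s.messages_to_non_contacts >= 100:
        score += 1   # high volume of messages to strangers
    return score
```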
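And in the spirit of the WhatsApp registration policy described above, the following sketch screens a registration phone number by line type using the open-source phonenumbers library (a port of Google's libphonenumber). A real provider would combine this with carrier-level lookups that this sketch omits, since many virtual-number ranges cannot be identified from the number alone.

```python
import phonenumbers
from phonenumbers import PhoneNumberType, number_type

# Line types disallowed for registration, echoing the policy described above.
DISALLOWED_TYPES = {
    PhoneNumberType.FIXED_LINE,   # landlines
    PhoneNumberType.TOLL_FREE,    # toll-free numbers
    PhoneNumberType.VOIP,         # known virtual/VoIP number ranges
}

def may_register(e164_number: str) -> bool:
    """Return True only if the number parses, is valid, and is not an
    obviously disallowed line type. This is a first-pass filter, not a
    guarantee that the number is a real SIM-based mobile number."""
    try:
        parsed = phonenumbers.parse(e164_number, None)
    except phonenumbers.NumberParseException:
        return False
    if not phonenumbers.is_valid_number(parsed):
        return False
    return number_type(parsed) not in DISALLOWED_TYPES
```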
None of the safety measures listed above requires an online service provider to “encroach” on the private communication of users. They do, however, have the potential to dramatically mitigate harm to users by addressing preventable risk factors, through architecture and design, on the periphery of the private communication itself.
Recommendation
Amend the proposed Online Harms Act to ensure private messaging services and private messaging features are included in the list of online service providers that would be subject to the Act and regulations. This may require broadening the definition of social media, and/or not making the ability to “communicate content to the public” the foundation of the inclusion criteria.
Appendix A. Safety measures used by industry in private messaging services/features
The following summary is derived from the Australian eSafety Commissioner’s Basic Online Safety Expectations (BOSE) reports [17][18], in which major tech companies were required to submit responses to a series of questions about their Trust and Safety practices. These questions were included in notices given under section 56(2) of the [Australian] Online Safety Act 2021.
| Platform | Private messaging (PM) feature | Technology used on the PM feature |
|---|---|---|
| Apple | iCloud email | PhotoDNA for known CSEA images |
| Apple | iMessage | “Communication Safety” tool identifies nude content and warns child users of the risks of sending and viewing this material |
| Meta | Facebook Messenger | If not E2EE, tools to detect known CSEA images: PhotoDNA, PDQ, proprietary matching technologies. Tools to detect unknown CSEA images: Google’s Content Safety API. Language analysis tools (only used to prioritize reports) |
| Meta | Instagram direct messages | If not E2EE, tools to detect known CSEA images: PhotoDNA, PDQ, proprietary matching technologies. Tools to detect unknown CSEA images: Google’s Content Safety API. Language analysis tools (only used to prioritize reports) |
| Meta | Cross-platform (Instagram/Facebook) | “Meta said it uses machine learning to analyse behavioural data ‘across our platforms’ to identify inappropriate interactions between an adult and teen. Meta stated that these processes do not include the scanning of private messages but do include the ability to identify indicators of potentially harmful intent by an adult toward a teen” |
| Meta | WhatsApp (user profile and group images, images and videos in user reports, group subject and description) | PhotoDNA and other proprietary technologies to identify known CSEA images; an internally developed tool for hashing and matching video frames to detect known CSEA material in video; Google’s Content Safety API and an internal classification model to identify new CSEA material |
| Microsoft | Consumer version of Teams (not E2EE) | PhotoDNA and MD5 to identify known CSEA images; PhotoDNA for Video to identify known CSEA videos |
| Microsoft | Xbox Live® | PhotoDNA and MD5 to identify known CSEA images; PhotoDNA for Video to identify known CSEA videos. Also uses ‘multiple tools and processes’ to detect grooming, including ‘text analysis’ |
| Microsoft | Skype® | Hash matching tools PhotoDNA and MD5 for images, and PhotoDNA for Video for video, on Skype messaging when it is not end-to-end encrypted |
| Microsoft | Outlook® | Hash matching tools PhotoDNA and MD5 for known CSEA images |
| Snapchat | Direct Chat | PhotoDNA for images and CSAI Match for videos that are uploaded from a user’s phone onto Snapchat; NOT applied to images and videos taken directly on the platform (these are captured in real time) |
| Google | Google Chat (consumer only) | PhotoDNA, SHA-256 and other proprietary technologies for known images |
| Google | Gmail (consumer only) | PhotoDNA, SHA-256 and other proprietary technologies for known images |
| | Direct Messages | PhotoDNA, Safer by Thorn, and internal tools (including machine learning models) to detect known images |
| TikTok | Direct Messages | Users can only share images that have already been posted publicly on the platform, at which point the content has already been through the moderation process (which includes hash matching) before it is shared. PhotoDNA, Google’s Content Safety API, CSAI Match and an internal model for known CSEA videos that are public but shared in direct messaging. TikTok also uses an internal computer vision model, an audio and NLP model, and Google’s Content Safety API to detect unknown images, plus an internal TikTok NLP tool for language detection |
| Twitch | Whispers® | Blocks URLs to known CSAM using Crisp and internal tools (including language analysis). Uses language analysis technology to detect CSEA-related terms and grooming, including tools from Crisp and others |
| Discord | Direct Messages | PhotoDNA and CLIP for known images; CLIP to detect new CSEA |
Footnotes
1. https://www.commonsensemedia.org/research/the-common-sense-census-media-use-by-tweens-and-teens-2021
2. https://www150.statcan.gc.ca/n1/pub/85-002-x/2022001/article/00008-eng.htm
3. https://www150.statcan.gc.ca/n1/pub/85-002-x/2024001/article/00003-eng.htm
4. https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/childrensonlinebehaviourinenglandandwales/yearendingmarch2020
5. https://www.esafety.gov.au/sites/default/files/2022-12/BOSE%20transparency%20report%20Dec%202022.pdf
6. https://support.apple.com/en-au/guide/iphone/iph203ab0be4/ios
7. https://www.linkedin.com/posts/raffile_meta-revokes-job-offer-to-sextortion-expert-activity-7196976442771947522-W5vt
8. https://www.nbcnews.com/tech/social-media/wizz-tinder-app-aimed-teens-removed-apple-google-stores-rcna136607
9. https://protectchildren.ca/en/resources-research/an-analysis-of-financial-sextortion-victim-posts-published-on-sextortion/
10. https://www.grimsbytelegraph.co.uk/news/snapchat-snap-map-nspcc-warning-180757
11. https://networkcontagion.us/reports/yahoo-boys/
12. https://www.esafety.gov.au/industry/tech-trends-and-challenges/anonymity
13. https://www.hackerfactor.com/blog/index.php?/archives/720-This-is-what-a-TOR-supporter-looks-like.html
14. https://faq.whatsapp.com/684051319521343/
15. https://faq.whatsapp.com/1369114327051380/
16. https://www.theregister.com/2023/05/16/ftc_xcast_illegal_robocalls/
17. https://www.esafety.gov.au/sites/default/files/2022-12/BOSE%20transparency%20report%20Dec%202022.pdf
18. https://www.esafety.gov.au/sites/default/files/2024-03/Basic-Online-Safety-Expectations-Full-Transparency-Report-October-2023_0.pdf