People who use messaging services need to be able to exercise agency in how they communicate. This includes managing privacy trade-offs and addressing unwanted or abusive content such as spam, mis- and disinformation, harassment, and sexually exploitative content. In current debates around child sexual abuse material (CSAM) in end-to-end encrypted (E2EE) environments, technical experts have proposed a variety of approaches to addressing abusive content, including user reporting, metadata analysis, and automated scanning of user-generated content. Given that there are many different kinds of users, each with unique needs and perceived risks to their online communications, how can we enable meaningful user choice and control in E2EE communications to address unwanted or abusive content?
Riana Pfefferkorn - Stanford Internet Observatory, Research Scholar
Jon Callas - EFF, Director of Technology Projects
Dhanaraj Thakur - CDT, Research Director
Kate D'Adamo - Reframe Health and Justice
Emma Llansó - CDT, Director of Free Expression Project