Modern-day online services are plagued by many kinds of harmful content, from hate speech to terrorist propaganda to depictions of the sexual abuse of children, to name just a few. In pressuring online service providers to better police harmful content on their services, regulators tend to focus on trust and safety techniques, such as automated systems for scanning or filtering content, that depend on the provider’s ability to access the contents of users’ files and communications at will. We call these techniques content-dependent. This focus on content analysis overlooks the prevalence and utility of what this article calls content-oblivious techniques: those that do not rely on guaranteed at-will access to content, such as metadata-based tools and user reports flagging abuse that the provider did not (or could not) detect on its own. This article presents the results of a survey of the trust and safety techniques employed by a group of online service providers, most of them communications services or services driven primarily by user-generated content.