What is CSAM?
Child Sexual Abuse Material, or CSAM, is the legal and industry term for what was once called "child pornography." The terminology shift matters. The content is not pornography; it is documentation of a crime against a child, and the language used to describe it shapes how seriously institutions treat it.
CSAM is illegal in nearly every country and is the single highest-priority category for any Trust & Safety team. Mishandling it can expose a platform to criminal liability, not only regulatory fines.
Legal reporting obligations
In the United States, 18 U.S.C. Section 2258A requires any electronic service provider that obtains actual knowledge of apparent CSAM on its platform to report it to the CyberTipline operated by the National Center for Missing & Exploited Children (NCMEC). Reports must be filed as soon as reasonably possible. Providers must also preserve the content and associated metadata for 90 days so that law enforcement can investigate. Failing to report carries civil and criminal penalties.
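To make the preservation requirement concrete, here is a minimal Python sketch of a preservation record with a 90-day retention clock. The ReportRecord shape, its field names, and the purge check are illustrative assumptions, not a real NCMEC or statutory schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the preservation step described above: once a report
# is filed, the content and its metadata are retained for 90 days so law
# enforcement can follow up. Field names are illustrative only.
PRESERVATION_WINDOW = timedelta(days=90)

@dataclass
class ReportRecord:
    content_id: str                 # internal identifier for the preserved content
    cybertipline_report_id: str     # identifier returned when the report was filed
    reported_at: datetime
    metadata: dict = field(default_factory=dict)  # uploader, IP, timestamps, etc.

    @property
    def preserve_until(self) -> datetime:
        """Earliest date the preserved evidence may be purged."""
        return self.reported_at + PRESERVATION_WINDOW

    def purge_allowed(self, now: datetime | None = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now >= self.preserve_until
```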
The volume is staggering: NCMEC received more than 36 million CyberTipline reports in 2023, with the overwhelming majority submitted by large online platforms. Similar reporting regimes exist in the EU, UK, Canada, and Australia. The EU is also debating expanded scanning obligations under its proposed CSA Regulation, which would push more of the detection burden onto providers.
How platforms detect CSAM
Detection usually combines two approaches. The first is hash matching against known CSAM databases. The most widely used hash tool is PhotoDNA, developed by Microsoft and licensed for free to qualifying platforms. NCMEC, the Internet Watch Foundation (IWF), and the Canadian Centre for Child Protection maintain additional hash lists that providers can subscribe to. Hash matching catches previously identified material with very high precision and almost no false positives, which is why it is the backbone of every mature pipeline.
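As a rough illustration of the matching step, the sketch below checks an image's perceptual hash against a local set of previously identified hashes. PhotoDNA itself is licensed rather than public, so this uses the open pHash algorithm from the imagehash package purely as a stand-in; the hash-list file and the exact-match lookup are simplifying assumptions (production systems typically match within a small Hamming distance rather than requiring exact equality).

```python
# Minimal sketch of hash matching against a known list. The hash values and
# the known-hashes file are placeholders, not a real NCMEC or IWF hash list.
from PIL import Image
import imagehash

def load_known_hashes(path: str) -> set[str]:
    """Load a newline-delimited file of previously identified hashes."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def matches_known_hash(image_path: str, known_hashes: set[str]) -> bool:
    """Return True if the image's perceptual hash appears in the known list."""
    digest = str(imagehash.phash(Image.open(image_path)))
    return digest in known_hashes
```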
The second approach uses machine learning classifiers to detect imagery that has never been seen before. This includes AI-generated CSAM, which has become a serious enforcement problem in its own right. Both detection streams feed a dedicated review pipeline staffed by specialized moderators with extensive wellness support.
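A hedged sketch of how those two streams might converge on a single pipeline: a known-hash match is escalated immediately, while a classifier score on previously unseen media is queued for specialist review above a threshold. The route names and threshold value are illustrative, not a standard.

```python
from enum import Enum

# Hypothetical routing logic combining the two detection streams described
# above. The threshold is a placeholder, tuned per platform and model.
class Route(Enum):
    ESCALATE_CONFIRMED = "escalate_confirmed"   # known-hash match
    SPECIALIST_REVIEW = "specialist_review"     # classifier flagged, needs a human
    NO_ACTION = "no_action"

CLASSIFIER_THRESHOLD = 0.85  # placeholder value

def route_detection(hash_match: bool, classifier_score: float) -> Route:
    if hash_match:
        return Route.ESCALATE_CONFIRMED
    if classifier_score >= CLASSIFIER_THRESHOLD:
        return Route.SPECIALIST_REVIEW
    return Route.NO_ACTION
```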
Handling and chain of custody
Because CSAM is contraband, platforms cannot simply delete it and move on. The content has to be preserved in a legally defensible way, access must be tightly restricted, and every interaction with the material must be logged for chain-of-custody purposes. Moderators who review this material need trauma-informed support, rotation policies, and access to counseling. Exposure to CSAM is consistently linked with severe psychological harm in the moderation workforce, and the operational decisions around staffing this work reflect that reality. Many platforms route suspected CSAM to third-party specialists rather than building the full pipeline in-house.
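A minimal sketch of what that logging can look like in practice, assuming a simple append-only JSON-lines log keyed to a SHA-256 digest of the preserved file. The log format and field names are assumptions; real implementations vary and are always paired with strict access controls.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative chain-of-custody logging: every interaction with preserved
# material is appended to a write-once log, tied to a cryptographic digest of
# the file so later tampering is detectable. Not a standard format.
def log_access(log_path: str, content_path: str, actor: str, action: str) -> None:
    with open(content_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,            # who touched the material
        "action": action,          # e.g. "viewed", "exported_to_law_enforcement"
        "content_sha256": digest,  # ties the log line to the exact bytes preserved
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
```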
The AI-generated problem
Generative AI has made it trivial to produce synthetic imagery depicting sexual abuse of minors. Many jurisdictions, including the United States under the PROTECT Act, treat computer-generated CSAM as equally illegal. Detection is harder, though, because hash matching only catches material that has already been identified and hashed; a newly generated image matches nothing in any database. The Internet Watch Foundation reported a sharp rise in AI-generated CSAM on the open web across 2023 and 2024.
Platforms and hash-sharing coalitions have had to retrain classifiers on synthetic examples and speed up the cadence at which new hashes are ingested and distributed.
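As a rough sketch of that faster cadence, the loop below periodically pulls new hashes from a coalition feed and merges them into the local matching set. The fetch_latest_hashes() helper and the one-hour interval are placeholders; real feeds, formats, and schedules differ by provider and coalition.

```python
import time

# Hedged sketch of a periodic hash-list refresh. The feed call is a stub.
REFRESH_INTERVAL_SECONDS = 3600  # placeholder cadence

def fetch_latest_hashes() -> set[str]:
    """Placeholder for a call to a hash-sharing coalition's distribution feed."""
    return set()

def run_ingestion_loop(local_hashes: set[str]) -> None:
    while True:
        new_hashes = fetch_latest_hashes() - local_hashes
        if new_hashes:
            local_hashes |= new_hashes
            print(f"Ingested {len(new_hashes)} new hashes")
        time.sleep(REFRESH_INTERVAL_SECONDS)
```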
