What is Section 230?
Section 230 of the Communications Decency Act of 1996, codified at 47 U.S.C. § 230, contains the 26 words that shaped the entire American internet. The operative language, in subsection (c)(1), reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
In plain English, platforms are not legally responsible for what their users post.
The two main protections
Section 230 actually does two distinct things, and conflating them is the source of most of the bad-faith debate around the law.
Subsection (c)(1), the publisher shield, prevents platforms from being treated as the publisher of third-party content. In practice that blocks defamation suits and most other tort claims arising from user posts.

Subsection (c)(2), the "Good Samaritan" clause, protects platforms from liability when they voluntarily restrict access to content they consider obscene, harassing, or otherwise objectionable, even when that content is constitutionally protected speech.

Together, the two clauses let platforms host user-generated content at scale without pre-screening every post, and let them moderate aggressively without losing their immunity.
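For Trust & Safety engineers, one way to internalize the two clauses is as two separate questions a legal-escalation triage tool might ask: who authored the content, and what did the platform do with it? Here is a minimal sketch of that structure; every type and name is hypothetical, not drawn from any real API.

```typescript
// Illustrative only: a toy triage helper that mirrors the structure of
// Section 230(c). All names here are hypothetical.

interface Escalation {
  contentAuthor: "user" | "platform";   // who created the content
  platformAction: "hosted" | "removed"; // what the platform did with it
}

function applicableShield(e: Escalation): string {
  if (e.contentAuthor === "platform") {
    // Content the platform itself authored gets no Section 230 protection.
    return "no shield: platform is the information content provider";
  }
  if (e.platformAction === "hosted") {
    // (c)(1): the platform is not the "publisher or speaker" of content
    // provided by another information content provider.
    return "(c)(1) publisher shield: claim treats platform as publisher";
  }
  // (c)(2): good-faith restriction of objectionable content is protected,
  // even when that content is constitutionally protected speech.
  return "(c)(2) Good Samaritan clause: voluntary restriction in good faith";
}
```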
What Section 230 does not cover
The shield is broad but not absolute. It offers no protection against federal criminal prosecutions, intellectual property claims, or claims under the Electronic Communications Privacy Act, and, since the 2018 FOSTA-SESTA amendment, it does not cover claims that a platform knowingly facilitated sex trafficking.
Platforms also remain liable for any content they themselves create or materially contribute to, which is why the line between hosting and authoring matters so much in litigation. If a platform edits a user's post in a way that changes its meaning, or prompts users to supply unlawful content, as the roommate-matching site did in Fair Housing Council v. Roommates.com, the immunity starts to wobble.
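Those carve-outs and the hosting/authoring line can be read as a pre-filter that runs before any shield analysis. A rough sketch, extending the toy example above; the claim categories and flag names are again hypothetical.

```typescript
// Hypothetical pre-filter: claim types outside the shield, plus the
// "materially contributed" test, checked before any (c)(1)/(c)(2) analysis.

const shieldExceptions = [
  "federal_criminal",      // federal criminal prosecutions
  "intellectual_property", // copyright and trademark claims
  "ecpa",                  // Electronic Communications Privacy Act claims
  "sex_trafficking",       // FOSTA-SESTA (2018) carve-out
] as const;

type Claim = (typeof shieldExceptions)[number] | "defamation" | "other_tort";

function shieldAvailable(claim: Claim, materiallyContributed: boolean): boolean {
  // No immunity for carved-out claim types...
  if ((shieldExceptions as readonly string[]).includes(claim)) return false;
  // ...or where the platform created or materially contributed to the content.
  return !materiallyContributed;
}
```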
Section 230 is a US statute. It has no force in the EU, UK, or any other jurisdiction that imposes its own liability regime on intermediaries, which is why platforms often behave very differently in different markets.
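That jurisdictional limit shows up concretely in platform infrastructure: moderation and legal-escalation logic is typically keyed to the market where content is viewed. A rough sketch of such a per-market table follows; the regime summaries are heavily simplified and the structure is illustrative, not a real system.

```typescript
// Hypothetical per-market table. Section 230 governs only the US entry;
// other markets run their own intermediary-liability regimes.

interface LiabilityRegime {
  framework: string;     // governing statute or regulation
  immunityModel: string; // simplified characterization, not legal advice
}

const regimes: Record<string, LiabilityRegime> = {
  US: { framework: "47 U.S.C. § 230", immunityModel: "broad immunity, not conditioned on notice" },
  EU: { framework: "Digital Services Act", immunityModel: "conditional immunity with notice-and-action duties" },
  UK: { framework: "Online Safety Act 2023", immunityModel: "statutory duties of care" },
};

// Same post, same platform, different legal posture per market.
function regimeFor(market: string): LiabilityRegime | undefined {
  return regimes[market];
}
```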
The ongoing debate
Section 230 has become one of the most contested laws in American tech policy. Critics on the left argue it lets platforms ignore harmful content without legal consequence. Critics on the right argue it lets platforms moderate political speech without accountability. Both sides want to change the law, usually in opposite directions.
The Supreme Court sidestepped the issue in Gonzalez v. Google (2023), resolving the case without ruling on Section 230's scope, but congressional proposals to amend or repeal the statute keep surfacing every session.
For Trust & Safety teams, the practical implication is direct: Section 230 is what makes aggressive content moderation legally viable in the United States. If it goes away or gets substantially rewritten, the entire operating model for US platforms changes with it.
