What is Cyberbullying?
Cyberbullying is peer aggression that happens online. It is defined by repetition, a power imbalance, and intent to harm, which is what separates it from a one-off rude comment. It has become one of the most visible youth-safety issues driving platform policy, regulation, and parental worry, partly because the harm rarely stays in one app. A pile-on that starts on TikTok spills into group chats, then into school the next morning.
How widespread it is
Pew Research Center's 2022 report "Teens and Cyberbullying" found that 46% of U.S. teens ages 13 to 17 had experienced at least one of six forms of online abuse. Offensive name-calling was the most common at 32%, followed by false rumors at 22% and physical threats at 10%. The CDC's Youth Risk Behavior Surveillance System reports that about 16% of high school students had been electronically bullied in the prior 12 months.
Girls, LGBTQ+ youth, and teens from marginalized racial and ethnic groups are targeted at disproportionate rates. UNICEF has found similar patterns across more than 30 countries.
Common forms
Cyberbullying is not a single behavior but a cluster of related harms:
- Harassment: repeated insulting or threatening messages across DMs, comments, or replies.
- Exclusion: deliberately leaving someone out of group chats, gaming lobbies, or tagged posts to humiliate them.
- Outing and doxxing-adjacent behavior: revealing private information, sexual orientation, or personal photos without consent.
- Impersonation: creating fake accounts to post as the target or to embarrass them.
- Flaming and pile-ons: coordinated waves of hostile replies, often triggered by a viral post.
- Image-based abuse: sharing embarrassing or edited images of a minor, increasingly including AI-generated deepfakes.
How platforms detect it
Keyword filters alone do not work. Individual messages often look benign in isolation, and the harm lives in the pattern over time.
Effective moderation combines several signals (a minimal code sketch follows this list):
- Content classifiers trained on harassment, insult, and toxicity labels, such as those powering Jigsaw's Perspective API or commercial providers like Moderation API.
- Behavioral signals: repeated unsolicited DMs, rapid follow-unfollow cycles, and new accounts targeting the same user.
- Conversation-level analysis: evaluating threads rather than isolated posts, so escalation and pile-ons become visible.
- User-in-the-loop tools: Instagram's Restrict feature, YouTube's comment hold-for-review, and TikTok's bulk-block tools let potential targets intervene without confrontation.
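To make this concrete, here is a minimal Python sketch of how a platform might combine these signals at the thread level. Everything in it is an assumption for illustration: the `Message` fields, the upstream classifier that supplies the `toxicity` score, and the weights are placeholders, not a production design.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Message:
    author_id: str
    target_id: str
    sent_at: datetime
    author_account_age_days: int
    toxicity: float  # 0-1 score from an upstream classifier (hypothetical)

def pile_on_score(thread: list[Message], window: timedelta = timedelta(hours=1)) -> float:
    """Blend content and behavioral signals over a whole thread.

    Weights and thresholds are illustrative, not tuned on real data.
    """
    if not thread:
        return 0.0
    latest = max(m.sent_at for m in thread)
    recent = [m for m in thread if m.sent_at >= latest - window]
    # Content signal: average classifier toxicity across the recent burst.
    content = sum(m.toxicity for m in recent) / len(recent)
    # Behavioral signal: many distinct senders hitting one target quickly.
    burst = min(len({m.author_id for m in recent}) / 10.0, 1.0)
    # Account signal: fresh accounts joining the burst suggest brigading.
    fresh = sum(1 for m in recent if m.author_account_age_days < 7) / len(recent)
    return 0.5 * content + 0.3 * burst + 0.2 * fresh
```

The point of scoring the thread rather than each message is exactly the one above: ten individually mild replies from ten brand-new accounts in an hour can score higher than a single rude comment.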
Instagram has also rolled out nudge prompts that ask users to reconsider potentially offensive comments before posting. The company reports that a meaningful share of users edit or delete the comment when nudged.
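Mechanically, a nudge is a soft gate in the posting flow rather than a block. A minimal sketch, assuming a `classify` function that returns a 0-1 offensiveness score (for example, a call to a hosted moderation endpoint) and an illustrative threshold:

```python
from typing import Callable

def submit_comment(text: str, classify: Callable[[str], float],
                   nudge_threshold: float = 0.8) -> dict:
    """Ask the user to reconsider instead of blocking outright.

    `classify` and the threshold are placeholders for illustration.
    """
    if classify(text) >= nudge_threshold:
        return {"status": "nudge",
                "prompt": "Are you sure you want to post this?"}
    return {"status": "posted"}
```

The design choice matters: the comment is held, not rejected, so a false positive costs the user one extra tap rather than a silenced post.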
Legal and regulatory context
In the United States, cyberbullying is addressed mostly at the state level. All 50 states have bullying laws, and most explicitly cover electronic harassment. Many require schools to investigate incidents that affect the school environment even when they happen off-campus.

Federal efforts have focused on youth safety more broadly. The Kids Online Safety Act (KOSA), which advanced through Congress in 2024, would impose a duty of care on platforms likely to be accessed by minors, including mitigation of bullying and harassment. California's Age-Appropriate Design Code, the UK's Children's Code enforced by the ICO, and the child-safety provisions of the EU Digital Services Act all push platforms toward stronger defaults for minors. The UK Online Safety Act specifically names cyberbullying-adjacent content in its child-safety duties.
Resources and best practices for platforms
StopBullying.gov (run by the U.S. Department of Health and Human Services), the Cyberbullying Research Center, and Crisis Text Line all publish research and victim-support materials.
For platforms, the practices that tend to work are:
- Age-appropriate defaults for teen accounts.
- Friction features such as comment nudges and reply controls.
- One-tap reporting with fast response times.
- Transparent appeals.
- Proactive outreach to users who show patterns consistent with victimization, for example a sudden drop in engagement combined with incoming hostile messages (a toy sketch follows).
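As a rough illustration of that last pattern, a platform might compare a user's recent activity against their own baseline while counting hostile inbound messages. The two-week window, the 50% drop, and the hostile-message threshold below are all made-up parameters:

```python
def flag_for_outreach(daily_posts: list[int], daily_hostile_inbound: list[int],
                      drop_ratio: float = 0.5, hostile_min: int = 5) -> bool:
    """Flag a possible victim: engagement drops while hostility spikes.

    Each list holds one count per day, oldest first, over the same 14 days.
    All thresholds are illustrative.
    """
    if len(daily_posts) < 14 or len(daily_hostile_inbound) < 14:
        return False  # need two weeks of history to compare
    baseline = sum(daily_posts[:7]) / 7
    current = sum(daily_posts[-7:]) / 7
    engagement_dropped = baseline > 0 and current <= drop_ratio * baseline
    hostility_spiked = sum(daily_hostile_inbound[-7:]) >= hostile_min
    return engagement_dropped and hostility_spiked
```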
