
Online hate groups sustain their messages by repeating powerful stories or routinely adding new allegations

Hate communities often flourish online for years, raising the question of how they persist. My research team has found that powerful stories keep members of a hate group galvanized, either by repeating the story over and over or by constantly adding fresh accusations and interpretations to it.

I’m a computational social scientist who studies social and political networks. My colleagues and I uncovered these trends by examining 10 years of posts, reactions and participation patterns in Facebook groups that shared antisemitic and Islamophobic content. Our findings have been accepted at the 2026 International Conference on Web and Social Media.

First, we measured who was posting and how that related to engagement in each group. Groups in which a small number of people produced most of the content tended to attract more reactions and responses. Then we looked at the subjects group members discussed – religion, immigration, geopolitics – and the kinds of stories they told about those topics, such as describing an entire group of people as criminals or warning that certain people are secretly taking over a country’s way of life.

When we put these pieces together, we discovered some clear patterns. Messages posted by a few very active people were strongly associated with higher site engagement in the form of likes and shares in the near term. And repetition – espousing the same ideas again and again – was an effective tactic. We also found that when many users kept adding fresh accusations, conspiracy theories and explanations, a group tended to persist. Very uniform content that used the same framing led to less engagement over time.


Different communities seemed to be drawn to different messaging patterns. In Islamophobic groups, the most prolific posters tended to repeat a narrow, consistent set of messages – often religiously framed posts condemning Muslims on moral grounds. In antisemitic groups, the most engaged members were more likely to share a mix of narratives, from tales of victimization to conspiracy theories about public figures.

A woman protests after a Kashmiri shawl seller was assaulted in India on Jan. 31, 2026. NurPhoto via Getty Images

Why it matters

Our findings suggest that hate communities can sustain themselves in various ways, so efforts to moderate them should consider these variations. If a few voices drive the conversation, removing them could quiet the noise. If new stories constantly appear from many contributors, harmful ideas may survive even if a few key online accounts are taken down. Hate networks can persist even after social media platforms ban specific groups or accounts.

It is also important to understand how stories can make prejudice feel justified and emotionally compelling. Extremist stories may claim that a group is under attack, that outsiders are dangerous or subhuman, or that violence is the only way to stay safe. Groups seen as outsiders – such as immigrants – are common targets, and they may be described as an “invasion” that threatens the nation.

What other research is being done

Researchers are finding that extremist ideas are now spreading through looser networks where many voices contribute and messaging can vary widely. That could affect whether engagement in the future still depends on consistent repetition or novelty. Some investigators are also scrutinizing how harmful language, conspiracy theories and propaganda evolve over time.


What’s next

Another important direction is tracking how hate narratives are spread by public figures and influencers, how they move between online platforms, and how they surface in offline groups and organizing efforts – all of which can normalize harmful ideas. My group is beginning to study how this amplification works: who shares which narratives and why, which kinds of people become bridges across different online platforms, and how those roles shape which messages spread.

The Research Brief is a short take on interesting academic work.

