Content moderation is a critical yet controversial issue in digital communication. It seeks to balance the right to free expression with the need to keep online spaces safe from harm. Governments, social media platforms, civil society, and users all play a role in shaping the policies that govern digital interactions. However, perspectives on content moderation vary widely across countries and cultures, making it a complex policy challenge.
Key Findings from the Research
The report, “CONTENT WARNING: Public Attitudes on Content Moderation and Freedom of Expression”, shows that global perspectives on these issues are complex. The study underscores the challenge of maintaining both free speech and a secure online environment. Key takeaways include:
- Who Should Be Responsible for Content Moderation? There is no global consensus on who should enforce content rules. While social media platforms are often viewed as the primary regulators, public opinion varies significantly across nations, highlighting the need for transparent and culturally sensitive policies.
- Balancing Free Speech and Harm Prevention: Countries like Sweden, Greece, the U.S., and Germany emphasize free expression, while South Africa, Brazil, and France lean towards stronger moderation to mitigate harm. Despite these differences, the majority of respondents favor a balanced approach.
- Normalization of Toxic Online Behavior: Many respondents perceive hate speech, discrimination, and online incivility as unavoidable. Personal experiences with verbal abuse, threats, and discrimination reinforce this concern.
- Preference for Moderation Over Misinformation: When faced with a trade-off between freedom of expression and reducing misinformation, most respondents favor moderation. Countries like France, Germany, and Brazil express strong preferences for content control to curb fake news and harmful narratives.
© Content Moderation Lab at TUM Think Tank
The Impact of Unmoderated Platforms
Recent changes in content moderation policies on major platforms like Meta and X (formerly Twitter) have fueled public debate. While some companies adopt a laissez-faire approach, our findings suggest that most users want active moderation to limit hate speech and misinformation. The experience of platforms like X, where engagement and profitability have declined under minimal moderation, further supports the argument that users prefer a safer online environment.
Policy Implications
The study sends a clear message: While people value free speech, they also recognize the need for safeguards against harmful content. Free speech absolutism is held by only a small minority of users almost everywhere in the world. Policymakers and tech leaders must therefore consider public attitudes when shaping content moderation strategies. The challenge is to develop policies that protect users from harm while preserving democratic values.
As debates around content moderation continue, our data show that the public wants digital platforms to be both free and fair. Striking the right balance between freedom of expression and user safety will be crucial for fostering inclusive and respectful online communities.
Study
CONTENT WARNING: Public Attitudes on Content Moderation and Freedom of Expression
Understanding the Harm of Toxic Content Disguised as Entertainment
The study’s findings align closely with the objectives of ToxicAInment, a bidt-funded project investigating the spread of toxic content on visual platforms such as TikTok, YouTube, and Instagram. The report highlights the normalization of toxic behavior, public concerns about misinformation, and the preference for stronger moderation to protect users, all of which align with ToxicAInment’s investigation into how harmful, extremist, and misleading content is disguised as entertainment on these platforms. Both underscore the challenge of balancing free speech with harm prevention, emphasizing that while people value expression, especially humor and entertainment, they also recognize the dangers of unchecked harmful content.
ToxicAInment contributes to this discussion by providing empirical insights into how toxic content becomes more permissible and engaging when framed as entertainment, reinforcing the report’s findings about the public’s wishes for clearer moderation policies and greater platform accountability. By mapping and analyzing toxic entertainment, ToxicAInment helps explain the mechanisms that make harmful content appealing and widespread, offering valuable context to the report’s findings on public attitudes and the need for digital safety measures.
Research project (in German)
Einsatz von KI zur Erhöhung der Resilienz gegen Toxizität in der Online-Unterhaltung (ToxicAInment) [Using AI to Increase Resilience Against Toxicity in Online Entertainment]
The blogs published by the bidt represent the views of the authors; they do not reflect the position of the Institute as a whole.
The post Public Attitudes on Content Moderation and Freedom of Expression first appeared on bidt DE.