X's Moderation Dilemma: Free Speech or Harmful Content?
X, formerly known as Twitter, has shared its Global Transparency Report for the first half of 2024 (https://lnkd.in/gSzPQitF), providing a look into how the platform moderates content and enforces its rules. The report reveals the challenges X faces in addressing harmful content while maintaining its philosophy of "freedom of speech, not freedom of reach."
The data shows that only 0.0123% of posts on the platform broke its rules, roughly one in every 8,000. The most common violations involved hateful conduct, accounting for 0.0057% of posts, followed by abuse, harassment, and violent content. To address these issues, X uses a tiered enforcement system, often reducing the visibility of violating posts by up to 85% rather than removing them entirely. For more serious violations, X removed or labeled over 10.7 million posts during the first half of the year.
Compared with its competitors, X’s moderation approach stands out as less aggressive. TikTok, for instance, removed 980 million comments in just the first quarter of 2024, representing 1.6% of all comments posted. TikTok’s policies prioritize the swift removal of violating content, particularly in videos and live streams. Meanwhile, Meta, the parent company of Facebook and Instagram, tracks the prevalence of harmful content, with Instagram reporting a hate speech prevalence rate of 0.025% and Facebook a slightly higher 0.2%. Snap Inc., which targets a younger audience, removed 5.7 million posts in the second half of 2023, accounting for 0.01% of total content views.
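For rough comparison, the quoted prevalence rates can be converted into approximate "one in N" figures. This is a back-of-the-envelope sketch using only the percentages cited above; the platforms measure different things (posts vs. comments vs. content views), so the numbers are illustrative rather than directly comparable:

```python
# Violation-prevalence figures quoted in the article, as percentages.
# Note: each platform's methodology and unit of measurement differs.
rates = {
    "X (rule-breaking posts)": 0.0123,
    "Instagram (hate speech)": 0.025,
    "Facebook (hate speech)": 0.2,
    "Snap (removals vs. content views)": 0.01,
    "TikTok (removed comments)": 1.6,
}

for platform, pct in sorted(rates.items(), key=lambda kv: kv[1]):
    # Convert a percentage to an approximate "one in N" ratio.
    one_in = round(100 / pct)
    print(f"{platform}: {pct}% ~ 1 in {one_in:,}")
```

By this conversion, X's 0.0123% works out to roughly one violating post in every 8,000, while TikTok's 1.6% of comments is closer to one in 60.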
X relies on a combination of machine learning and human reviewers to handle the enormous volume of flagged content. Automated systems identify potential violations, which are then reviewed by moderators. Despite these efforts, the scale of flagged content is vast, with over 224 million user reports in the first half of 2024. Most of these reports involved abuse, harassment, or hateful behavior. Additionally, X has focused heavily on tackling spam and platform manipulation, suspending 464 million accounts during the same period. This aligns with Elon Musk’s emphasis on combating bots and spam since acquiring the platform in 2022.
The report highlights X’s claim that its moderation practices are rooted in human rights principles, focusing on education and deterrence rather than outright censorship. However, critics argue that this approach is too lenient, particularly regarding hate speech. Recent policy changes, such as no longer treating deadnaming or misgendering as violations, have drawn condemnation from advocacy groups.
This transparency report comes as social media platforms face increasing scrutiny over their moderation practices. Many believe platforms must do more to balance free expression with public safety, especially as harmful content and misinformation continue to spread. While artificial intelligence plays a key role in moderation, experts point out its limitations, particularly in understanding cultural and linguistic nuances.
X’s approach remains a work in progress, and its ability to strike the right balance between freedom of speech and responsible content moderation will likely remain a topic of debate in the future.