Online content moderation
Coordinator: Dr. Savvas Zannettou
Online content moderation is an important aspect of online social networks, as it ensures that content posted by users abides by the platform's guidelines and is appropriate for other users. To implement content moderation, platforms employ a human-machine collaborative system in which moderators (both humans and machines) review content and take actions. Moderation actions can happen at various levels: on specific posts (e.g., removing posts that break the rules), at the user level (e.g., banning users who repeatedly break the rules), and at the community level (e.g., banning entire communities such as subreddits or Facebook groups).

Overall, due to the complexity and black-box nature of content moderation, several research avenues remain open. First, it is unclear whether specific moderation strategies are effective. For instance, banning a harmful community might be beneficial for the platform; however, its users may migrate to other communities/platforms and possibly become more radicalized/toxic. Second, since content moderation relies on both humans and machines, an important question is how to distribute the work between them so as to maximize the effectiveness of content moderation while minimizing the mental toll that continuous exposure to harmful content takes on human moderators. Third, we lack an understanding of how content moderation is applied on social networks, how it changes and evolves over time, and whether moderators can abuse their power. In this research area, we aim to shed light on these research directions and contribute towards making better moderation decisions and designing more effective human-machine content moderation systems.
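To make the human-machine division of labor concrete, the sketch below illustrates one common thresholding pattern: the machine acts on cases it is confident about and escalates ambiguous ones to human reviewers. This is a hypothetical illustration, not any platform's actual pipeline; the `Post` class, the `harm_score` stand-in classifier, and the thresholds are all assumptions made for the example.

```python
# Hypothetical sketch (not any platform's actual system) of splitting
# moderation work between a machine classifier and human moderators:
# confident machine decisions are taken automatically, uncertain cases
# are escalated to humans.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


# Stand-in for a trained harmfulness classifier; a toy keyword score is
# used here only so the sketch runs end to end.
HARMFUL_KEYWORDS = {"slur1", "slur2"}  # placeholder terms


def harm_score(post: Post) -> float:
    """Return a harmfulness score in [0, 1] (toy implementation)."""
    words = post.text.lower().split()
    if not words:
        return 0.0
    return sum(w in HARMFUL_KEYWORDS for w in words) / len(words)


def route(post: Post, remove_above: float = 0.9, approve_below: float = 0.1) -> str:
    """Return the moderation action for a post."""
    score = harm_score(post)
    if score >= remove_above:
        return "auto_remove"    # machine is confident the post violates the rules
    if score <= approve_below:
        return "auto_approve"   # machine is confident the post is benign
    return "human_review"       # ambiguous cases go to a human moderator


print(route(Post("p1", "a perfectly normal comment")))  # -> auto_approve
```

The thresholds encode the trade-off discussed above: tightening them routes more content to humans (fewer automated mistakes, higher mental toll), while loosening them shifts work to the machine (higher throughput, more errors).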
Projects/Papers
A Decade of Moderation: Analyzing the Evolution, Lifespan, and Toxicity of Moderator Communications on Reddit (under submission):
Authors: Savvas Zannettou, Krishna Gummadi (MPI-SWS), Shagun Jhaver (University of Washington)
Paper Link: N/A
Short Description: In this paper, we investigate the evolution of moderator communications on Reddit, how prevalent the use of bots for moderating content is, and whether moderators spread toxic content in their public communications. Among other things, we find that content moderation has become more prevalent over time, that bots play an increasingly important role in moderating online content, and that a small percentage of moderator posts (shared by both humans and bots) are toxic.
Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels (under submission):
Authors: Manoel Horta Ribeiro (EPFL), Shagun Jhaver (University of Washington), Savvas Zannettou, Jeremy Blackburn (Binghamton University), Emiliano De Cristofaro (UCL), Gianluca Stringhini (Boston University), Robert West (EPFL)
Paper Link: https://arxiv.org/abs/2010.10397
Short Description: Recently, there has been a heated debate on how platforms should moderate content and whether deplatforming (i.e., banning users/communities) is actually effective. In this work, we focus on understanding the effectiveness of community-level moderation interventions (i.e., subreddit bans on Reddit) once users migrate to other standalone websites. We find that these interventions lead to a decrease in posting activity and in the number of active users on the standalone websites, while at the same time we find evidence that users become more toxic and radicalized after the platform migration.