Weaponized information refers to carefully crafted information that aims to deceive people, or information presented in such a way as to manipulate or attack users. Examples include the dissemination of hateful or divisive image-based memes by state-sponsored trolls (e.g., Russian trolls) and the creation and dissemination of deepfake or cheapfake videos. Usually, the dissemination of weaponized information is accompanied by an intent to manipulate or harm users, and at large scale it can have devastating effects on both the online and offline world. For instance, anecdotal evidence suggests that during the 2016 US elections, Russian state-sponsored trolls exploited social networks to disseminate weaponized information, and that this activity likely affected voting preferences. Motivated by the importance and impact that weaponized information can have on society, in this line of research we aim to understand weaponized information, design techniques to automatically detect instances of it, and investigate possible mitigation strategies.
Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter (ICWSM 2020)
Authors: Savvas Zannettou, Tristan Caulfield (UCL), Barry Bradlyn (UIUC), Emiliano De Cristofaro (UCL), Gianluca Stringhini (Boston University), Jeremy Blackburn (Binghamton University)
Short description: State-sponsored trolls are individuals paid by governments who operate a set of online personas and aim to push specific narratives on the Web. In this work, we focus on understanding the use of images by these actors on Twitter by developing and applying an image-processing pipeline. We find that the image posting activity of Russian state-sponsored trolls coincides with real-world events, and we shed light on their targets as well as the content disseminated via images.
"And We Will Fight For Our Race!" A Measurement Study of Genetic Testing Conversations on Reddit and 4chan (ICWSM 2020)
Authors: Alexandros Mittos (UCL), Savvas Zannettou, Jeremy Blackburn (Binghamton University), Emiliano De Cristofaro (UCL)
Short description: In this work, we focus on understanding how genetic testing results are discussed on various social networks and whether they are weaponized (i.e., used to attack specific users). We find instances where genetic testing discourse on Reddit and 4chan is toxic and misogynistic, and that these discussions include references to alt-right personalities and antisemitic rhetoric.
Disturbed YouTube for Kids: Characterizing and Detecting Inappropriate Videos Targeting Young Children (ICWSM 2020)
Authors: Kostantinos Papadamou (CUT), Antonis Papasavva (UCL), Savvas Zannettou, Jeremy Blackburn (Binghamton University), Nicolas Kourtellis (Telefonica), Ilias Leontiadis (Samsung), Gianluca Stringhini (Boston University), Michael Sirivianos (CUT)
Short description: Recently, extensive anecdotal evidence emerged indicating that YouTube’s recommendation algorithm is promoting videos that are harmful to young children. In this work, to shed light on this problem, we develop a deep learning classifier for detecting inappropriate videos targeting young children and assess whether the recommendation engine is promoting such videos.
Understanding the Use of Fauxtography on Social Media (under submission)
Authors: Yuping Wang (Boston University), Fatemeh Tahmasbi, Jeremy Blackburn (Binghamton University), Barry Bradlyn (UIUC), Emiliano De Cristofaro (UCL), David Magerman (Differential), Savvas Zannettou, Gianluca Stringhini (Boston University)
Short description: Fauxtography refers to news images that have been modified or miscaptioned to change their intent, often with the goal of spreading a false impression of the events they claim to depict. In this work, we study how fauxtography images are shared on Twitter, Reddit, and 4chan, with a particular focus on understanding the engagement that true and false images receive on each social network.