Savvas Zannettou

Address: Max-Planck-Institut für Informatik, Saarland Informatics Campus, Campus E1 4, 66123 Saarbrücken
Location: E1 4 - 504
Phone: +49 681 9325 3524
Fax: +49 681 9325 5719

Personal Information

I was born in Larnaca, Cyprus, where I have spent most of my life; during my studies, however, I travelled to several places in Europe and the USA. In my free time I try to get out of the Silver division in League of Legends (mainly playing Jax in the top lane), and I like to watch various sports, including football (or soccer, as my American friends call it), basketball, and tennis. I also regularly shitpost online with my colleagues and generate some high-quality memes for fun/trolling. Furthermore, I lurk in various Web communities, including 4chan's /pol/.

Please see my personal website for more information.

Publications

2020
Mittos, A., Zannettou, S., Blackburn, J., & De Cristofaro, E. (2020a). Analyzing Genetic Testing Discourse on the Web Through the Lens of Twitter, Reddit, and 4chan. ACM Transactions on the Web, 14(4). doi:10.1145/3404994
Hoseini, M., Melo, P., Júnior, M., Benevenuto, F., Chandrasekaran, B., Feldmann, A., & Zannettou, S. (2020). Demystifying the Messaging Platforms’ Ecosystem Through the Lens of Twitter. In IMC ’20, 20th ACM Internet Measurement Conference. Virtual Event, USA: ACM. doi:10.1145/3419394.3423651
Mittos, A., Zannettou, S., Blackburn, J., & De Cristofaro, E. (2020b). “And We Will Fight for Our Race!” A Measurement Study of Genetic Testing Conversations on Reddit and 4chan. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020) (pp. 452–463). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7314
Papadamou, K., Papasavva, A., Zannettou, S., Blackburn, J., Kourtellis, N., Leontiadis, I., … Sirivianos, M. (2020). Disturbed YouTube for Kids: Characterizing and Detecting Inappropriate Videos Targeting Young Children. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7320
Zannettou, S., Caulfield, T., Bradlyn, B., De Cristofaro, E., Stringhini, G., & Blackburn, J. (2020). Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7342
Zannettou, S., Finkelstein, J., Bradlyn, B., & Blackburn, J. (2020). A Quantitative Approach to Understanding Online Antisemitism. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7343
Baumgartner, J., Zannettou, S., Keegan, B., Squire, M., & Blackburn, J. (2020). The Pushshift Reddit Dataset. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7347
Zannettou, S., Elsherief, M., Belding, E., Nilizadeh, S., & Stringhini, G. (2020). Measuring and Characterizing Hate Speech on News Websites. In WebSci ’20, 12th ACM Conference on Web Science. Southampton, UK (Online): ACM. doi:10.1145/3394231.3397902
Papasavva, A., Zannettou, S., De Cristofaro, E., Stringhini, G., & Blackburn, J. (2020). Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7354
Baumgartner, J., Zannettou, S., Squire, M., & Blackburn, J. (2020). The Pushshift Telegram Dataset. In Proceedings of the Fourteenth International Conference on Web and Social Media (ICWSM 2020). Atlanta, GA, USA (Virtual Event): AAAI. Retrieved from https://ojs.aaai.org//index.php/ICWSM/article/view/7348
Horta Ribeiro, M., Blackburn, J., Bradlyn, B., De Cristofaro, E., Stringhini, G., Long, S., … Zannettou, S. (2020). The Evolution of the Manosphere Across the Web. Retrieved from https://arxiv.org/abs/2001.07600
(arXiv: 2001.07600)
Abstract
In this paper, we present a large-scale characterization of the Manosphere, a conglomerate of Web-based misogynist movements roughly focused on "men's issues," which has seen significant growth over the past years. We do so by gathering and analyzing 28.8M posts from 6 forums and 51 subreddits. Overall, we paint a comprehensive picture of the evolution of the Manosphere on the Web, showing the links between its different communities over the years. We find that milder and older communities, such as Pick Up Artists and Men's Rights Activists, are giving way to more extremist ones like Incels and Men Going Their Own Way, with a substantial migration of active users. Moreover, our analysis suggests that these newer communities are more toxic and misogynistic than the former.
Papadamou, K., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G., & Sirivianos, M. (2020a). Understanding the Incel Community on YouTube. Retrieved from https://arxiv.org/abs/2001.08293
(arXiv: 2001.08293)
Abstract
YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform also hosts inappropriate, toxic, and/or hateful content. One community that has come into the spotlight for sharing and publishing hateful content are the so-called Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues, who have often been linked to misogynistic views. In this paper, we set out to analyze the Incel community on YouTube by focusing on the evolution of this community over the last decade and understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel-related communities within Reddit, and perform a data-driven characterization of the content posted on YouTube. Among other things, we find that the Incel community on YouTube is getting traction and that during the last decade the number of Incel-related videos and comments rose substantially. Also, we quantify the probability that a user will encounter an Incel-related video by virtue of YouTube's recommendation algorithm. Within five hops when starting from a non-Incel-related video, this probability is 1 in 5, which is alarmingly high as such content is likely to share toxic and misogynistic views.
Schild, L., Ling, C., Blackburn, J., Stringhini, G., Zhang, Y., & Zannettou, S. (2020). “Go eat a bat, Chang!”: An Early Look on the Emergence of Sinophobic Behavior on Web Communities in the Face of COVID-19. Retrieved from https://arxiv.org/abs/2004.04046
(arXiv: 2004.04046)
Abstract
The outbreak of the COVID-19 pandemic has changed our lives in unprecedented ways. In the face of the projected catastrophic consequences, many countries have enacted social distancing measures in an attempt to limit the spread of the virus. Under these conditions, the Web has become an indispensable medium for information acquisition, communication, and entertainment. At the same time, unfortunately, the Web is being exploited for the dissemination of potentially harmful and disturbing content, such as the spread of conspiracy theories and hateful speech towards specific ethnic groups, in particular towards Chinese people since COVID-19 is believed to have originated from China. In this paper, we make a first attempt to study the emergence of Sinophobic behavior on the Web during the outbreak of the COVID-19 pandemic. We collect two large-scale datasets from Twitter and 4chan's Politically Incorrect board (/pol/) over a time period of approximately five months and analyze them to investigate whether there is a rise or important differences with regard to the dissemination of Sinophobic content. We find that COVID-19 indeed drives the rise of Sinophobia on the Web and that the dissemination of Sinophobic content is a cross-platform phenomenon: it exists on fringe Web communities like /pol/, and to a lesser extent on mainstream ones like Twitter. Also, using word embeddings over time, we characterize the evolution and emergence of new Sinophobic slurs on both Twitter and /pol/. Finally, we find interesting differences in the context in which words related to Chinese people are used on the Web before and after the COVID-19 outbreak: on Twitter we observe a shift towards blaming China for the situation, while on /pol/ we find a shift towards using more (and new) Sinophobic slurs.
Export
BibTeX
@online{Schild_arXiv2004.04046, TITLE = {"Go eat a bat, {Chang!}": {A}n Early Look on the Emergence of Sinophobic Behavior on {Web} Communities in the Face of {COVID}-19}, AUTHOR = {Schild, Leonard and Ling, Chen and Blackburn, Jeremy and Stringhini, Gianluca and Zhang, Yang and Zannettou, Savvas}, LANGUAGE = {eng}, URL = {https://arxiv.org/abs/2004.04046}, EPRINT = {2004.04046}, EPRINTTYPE = {arXiv}, YEAR = {2020}, MARGINALMARK = {$\bullet$}, ABSTRACT = {The outbreak of the COVID-19 pandemic has changed our lives in unprecedented ways. In the face of the projected catastrophic consequences, many countries have enacted social distancing measures in an attempt to limit the spread of the virus. Under these conditions, the Web has become an indispensable medium for information acquisition, communication, and entertainment. At the same time, unfortunately, the Web is being exploited for the dissemination of potentially harmful and disturbing content, such as the spread of conspiracy theories and hateful speech towards specific ethnic groups, in particular towards Chinese people since COVID-19 is believed to have originated from China. In this paper, we make a first attempt to study the emergence of Sinophobic behavior on the Web during the outbreak of the COVID-19 pandemic. We collect two large-scale datasets from Twitter and 4chan's Politically Incorrect board (/pol/) over a time period of approximately five months and analyze them to investigate whether there is a rise or important differences with regard to the dissemination of Sinophobic content. We find that COVID-19 indeed drives the rise of Sinophobia on the Web and that the dissemination of Sinophobic content is a cross-platform phenomenon: it exists on fringe Web communities like \dspol, and to a lesser extent on mainstream ones like Twitter. Also, using word embeddings over time, we characterize the evolution and emergence of new Sinophobic slurs on both Twitter and /pol/. 
Finally, we find interesting differences in the context in which words related to Chinese people are used on the Web before and after the COVID-19 outbreak: on Twitter we observe a shift towards blaming China for the situation, while on /pol/ we find a shift towards using more (and new) Sinophobic slurs.}, }
Endnote
%0 Report %A Schild, Leonard %A Ling, Chen %A Blackburn, Jeremy %A Stringhini, Gianluca %A Zhang, Yang %A Zannettou, Savvas %+ External Organizations External Organizations External Organizations External Organizations External Organizations Internet Architecture, MPI for Informatics, Max Planck Society %T "Go eat a bat, Chang!": An Early Look on the Emergence of Sinophobic Behavior on Web Communities in the Face of COVID-19 : %G eng %U http://hdl.handle.net/21.11116/0000-0007-89C5-0 %U https://arxiv.org/abs/2004.04046 %D 2020 %X The outbreak of the COVID-19 pandemic has changed our lives in unprecedented ways. In the face of the projected catastrophic consequences, many countries have enacted social distancing measures in an attempt to limit the spread of the virus. Under these conditions, the Web has become an indispensable medium for information acquisition, communication, and entertainment. At the same time, unfortunately, the Web is being exploited for the dissemination of potentially harmful and disturbing content, such as the spread of conspiracy theories and hateful speech towards specific ethnic groups, in particular towards Chinese people since COVID-19 is believed to have originated from China. In this paper, we make a first attempt to study the emergence of Sinophobic behavior on the Web during the outbreak of the COVID-19 pandemic. We collect two large-scale datasets from Twitter and 4chan's Politically Incorrect board (/pol/) over a time period of approximately five months and analyze them to investigate whether there is a rise or important differences with regard to the dissemination of Sinophobic content. We find that COVID-19 indeed drives the rise of Sinophobia on the Web and that the dissemination of Sinophobic content is a cross-platform phenomenon: it exists on fringe Web communities like \dspol, and to a lesser extent on mainstream ones like Twitter. 
Also, using word embeddings over time, we characterize the evolution and emergence of new Sinophobic slurs on both Twitter and /pol/. Finally, we find interesting differences in the context in which words related to Chinese people are used on the Web before and after the COVID-19 outbreak: on Twitter we observe a shift towards blaming China for the situation, while on /pol/ we find a shift towards using more (and new) Sinophobic slurs. %K cs.SI,Computer Science, Computers and Society, cs.CY
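The embedding analysis described in this entry rests on comparing a word's nearest neighbors in vector space across time periods: a context shift shows up as a change in neighbors. Below is a minimal, hypothetical sketch of that idea using plain cosine similarity over toy, made-up 3-d vectors; the paper's actual embeddings are word2vec models trained on the Twitter and /pol/ corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest_neighbors(word, embeddings, k=3):
    """Words whose vectors are most similar to `word` in one time slice."""
    target = embeddings[word]
    scored = [(cosine(target, vec), w)
              for w, vec in embeddings.items() if w != word]
    return [w for _, w in sorted(scored, reverse=True)[:k]]

# Toy embeddings for two time slices (entirely invented vectors).
before = {"china": [1.0, 0.2, 0.1], "trade": [0.9, 0.3, 0.0],
          "travel": [0.8, 0.1, 0.3], "blame": [0.1, 0.9, 0.2]}
after  = {"china": [0.2, 1.0, 0.1], "trade": [0.9, 0.3, 0.0],
          "travel": [0.8, 0.1, 0.3], "blame": [0.2, 0.95, 0.1]}

# A shift in context appears as a change in nearest neighbors over time.
print(nearest_neighbors("china", before, k=1))  # ['trade']
print(nearest_neighbors("china", after, k=1))   # ['blame']
```

In the paper's setting the same comparison is run over embeddings trained on posts from before and after the COVID-19 outbreak, rather than on hand-picked vectors as here.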
Papasavva, A., Blackburn, J., Stringhini, G., Zannettou, S., & De Cristofaro, E. (2020). “Is it a Qoincidence?”: A First Step Towards Understanding and Characterizing the QAnon Movement on Voat.co. Retrieved from https://arxiv.org/abs/2009.04885
(arXiv: 2009.04885)
Abstract
Online fringe communities offer fertile grounds for users to seek and share paranoid ideas fueling suspicion of mainstream news, and outright conspiracy theories. Among these, the QAnon conspiracy theory emerged in 2017 on 4chan, broadly supporting the idea that powerful politicians, aristocrats, and celebrities are closely engaged in a global pedophile ring. At the same time, governments are thought to be controlled by "puppet masters," as democratically elected officials serve as a fake showroom of democracy. In this paper, we provide an empirical exploratory analysis of the QAnon community on Voat.co, a Reddit-esque news aggregator, which has recently captured the interest of the press for its toxicity and for providing a platform to QAnon followers. More precisely, we analyze a large dataset from /v/GreatAwakening, the most popular QAnon-related subverse (the Voat equivalent of a subreddit) to characterize activity and user engagement. To further understand the discourse around QAnon, we study the most popular named entities mentioned in the posts, along with the most prominent topics of discussion, which focus on US politics, Donald Trump, and world events. We also use word2vec models to identify narratives around QAnon-specific keywords, and our graph visualization shows that some of the QAnon-related ones are closely related to those from the Pizzagate conspiracy theory and "drops" by "Q." Finally, we analyze content toxicity, finding that discussions on /v/GreatAwakening are less toxic than in the broad Voat community.
Export
BibTeX
@online{Papasavva_arXiv2009.04885, TITLE = {"Is it a {Qoincidence}?": {A} First Step Towards Understanding and Characterizing the {QAnon} Movement on {Voat.co}}, AUTHOR = {Papasavva, Antonis and Blackburn, Jeremy and Stringhini, Gianluca and Zannettou, Savvas and De Cristofaro, Emiliano}, LANGUAGE = {eng}, URL = {https://arxiv.org/abs/2009.04885}, EPRINT = {2009.04885}, EPRINTTYPE = {arXiv}, YEAR = {2020}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Online fringe communities offer fertile grounds for users to seek and share paranoid ideas fueling suspicion of mainstream news, and outright conspiracy theories. Among these, the QAnon conspiracy theory has emerged in 2017 on 4chan, broadly supporting the idea that powerful politicians, aristocrats, and celebrities are closely engaged in a global pedophile ring. At the same time, governments are thought to be controlled by "puppet masters," as democratically elected officials serve as a fake showroom of democracy. In this paper, we provide an empirical exploratory analysis of the QAnon community on Voat.co, a Reddit-esque news aggregator, which has recently captured the interest of the press for its toxicity and for providing a platform to QAnon followers. More precisely, we analyze a large dataset from /v/GreatAwakening, the most popular QAnon-related subverse (the Voat equivalent of a subreddit) to characterize activity and user engagement. To further understand the discourse around QAnon, we study the most popular named entities mentioned in the posts, along with the most prominent topics of discussion, which focus on US politics, Donald Trump, and world events. We also use word2vec models to identify narratives around QAnon-specific keywords, and our graph visualization shows that some of QAnon-related ones are closely related to those from the Pizzagate conspiracy theory and "drops" by "Q." 
Finally, we analyze content toxicity, finding that discussions on /v/GreatAwakening are less toxic than in the broad Voat community.}, }
Endnote
%0 Report %A Papasavva, Antonis %A Blackburn, Jeremy %A Stringhini, Gianluca %A Zannettou, Savvas %A De Cristofaro, Emiliano %+ External Organizations External Organizations External Organizations Internet Architecture, MPI for Informatics, Max Planck Society External Organizations %T "Is it a Qoincidence?": A First Step Towards Understanding and Characterizing the QAnon Movement on Voat.co : %G eng %U http://hdl.handle.net/21.11116/0000-0007-89D2-1 %U https://arxiv.org/abs/2009.04885 %D 2020 %X Online fringe communities offer fertile grounds for users to seek and share paranoid ideas fueling suspicion of mainstream news, and outright conspiracy theories. Among these, the QAnon conspiracy theory has emerged in 2017 on 4chan, broadly supporting the idea that powerful politicians, aristocrats, and celebrities are closely engaged in a global pedophile ring. At the same time, governments are thought to be controlled by "puppet masters," as democratically elected officials serve as a fake showroom of democracy. In this paper, we provide an empirical exploratory analysis of the QAnon community on Voat.co, a Reddit-esque news aggregator, which has recently captured the interest of the press for its toxicity and for providing a platform to QAnon followers. More precisely, we analyze a large dataset from /v/GreatAwakening, the most popular QAnon-related subverse (the Voat equivalent of a subreddit) to characterize activity and user engagement. To further understand the discourse around QAnon, we study the most popular named entities mentioned in the posts, along with the most prominent topics of discussion, which focus on US politics, Donald Trump, and world events. We also use word2vec models to identify narratives around QAnon-specific keywords, and our graph visualization shows that some of QAnon-related ones are closely related to those from the Pizzagate conspiracy theory and "drops" by "Q." 
Finally, we analyze content toxicity, finding that discussions on /v/GreatAwakening are less toxic than in the broad Voat community. %K Computer Science, Computers and Society, cs.CY
Papadamou, K., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G., & Sirivianos, M. (2020b). “It is just a flu”: Assessing the Effect of Watch History on YouTube’s Pseudoscientific Video Recommendations. Retrieved from https://arxiv.org/abs/2010.11638
(arXiv: 2010.11638)
Abstract
YouTube has revolutionized the way people discover and consume videos, becoming one of the primary news sources for Internet users. Since content on YouTube is generated by its users, the platform is particularly vulnerable to misinformative and conspiratorial videos. Even worse, the role played by YouTube's recommendation algorithm in unwittingly promoting questionable content is not well understood, and could potentially make the problem even worse. This can have dire real-world consequences, especially when pseudoscientific content is promoted to users at critical times, e.g., during the COVID-19 pandemic. In this paper, we set out to characterize and detect pseudoscientific misinformation on YouTube. We collect 6.6K videos related to COVID-19, the flat earth theory, the anti-vaccination, and anti-mask movements; using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant. We then train a deep learning classifier to detect pseudoscientific videos with an accuracy of 76.1%. Next, we quantify user exposure to this content on various parts of the platform (i.e., a user's homepage, recommended videos while watching a specific video, or search results) and how this exposure changes based on the user's watch history. We find that YouTube's recommendation algorithm is more aggressive in suggesting pseudoscientific content when users are searching for specific topics, while these recommendations are less common on a user's homepage or when actively watching pseudoscientific videos. Finally, we shed light on how a user's watch history substantially affects the type of recommended videos.
Export
BibTeX
@online{Papadamou_arXiv2010.11638, TITLE = {"It is just a flu": {A}ssessing the Effect of Watch History on {YouTube}'s Pseudoscientific Video Recommendations}, AUTHOR = {Papadamou, Kostantinos and Zannettou, Savvas and Blackburn, Jeremy and De Cristofaro, Emiliano and Stringhini, Gianluca and Sirivianos, Michael}, LANGUAGE = {eng}, URL = {https://arxiv.org/abs/2010.11638}, EPRINT = {2010.11638}, EPRINTTYPE = {arXiv}, YEAR = {2020}, MARGINALMARK = {$\bullet$}, ABSTRACT = {YouTube has revolutionized the way people discover and consume videos, becoming one of the primary news sources for Internet users. Since content on YouTube is generated by its users, the platform is particularly vulnerable to misinformative and conspiratorial videos. Even worse, the role played by YouTube's recommendation algorithm in unwittingly promoting questionable content is not well understood, and could potentially make the problem even worse. This can have dire real-world consequences, especially when pseudoscientific content is promoted to users at critical times, e.g., during the COVID-19 pandemic. In this paper, we set out to characterize and detect pseudoscientific misinformation on YouTube. We collect 6.6K videos related to COVID-19, the flat earth theory, the anti-vaccination, and anti-mask movements; using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant. We then train a deep learning classifier to detect pseudoscientific videos with an accuracy of 76.1%. Next, we quantify user exposure to this content on various parts of the platform (i.e., a user's homepage, recommended videos while watching a specific video, or search results) and how this exposure changes based on the user's watch history. 
We find that YouTube's recommendation algorithm is more aggressive in suggesting pseudoscientific content when users are searching for specific topics, while these recommendations are less common on a user's homepage or when actively watching pseudoscientific videos. Finally, we shed light on how a user's watch history substantially affects the type of recommended videos.}, }
Endnote
%0 Report %A Papadamou, Kostantinos %A Zannettou, Savvas %A Blackburn, Jeremy %A De Cristofaro, Emiliano %A Stringhini, Gianluca %A Sirivianos, Michael %+ External Organizations Internet Architecture, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T "It is just a flu": Assessing the Effect of Watch History on YouTube's Pseudoscientific Video Recommendations : %G eng %U http://hdl.handle.net/21.11116/0000-0007-89E5-C %U https://arxiv.org/abs/2010.11638 %D 2020 %X YouTube has revolutionized the way people discover and consume videos, becoming one of the primary news sources for Internet users. Since content on YouTube is generated by its users, the platform is particularly vulnerable to misinformative and conspiratorial videos. Even worse, the role played by YouTube's recommendation algorithm in unwittingly promoting questionable content is not well understood, and could potentially make the problem even worse. This can have dire real-world consequences, especially when pseudoscientific content is promoted to users at critical times, e.g., during the COVID-19 pandemic. In this paper, we set out to characterize and detect pseudoscientific misinformation on YouTube. We collect 6.6K videos related to COVID-19, the flat earth theory, the anti-vaccination, and anti-mask movements; using crowdsourcing, we annotate them as pseudoscience, legitimate science, or irrelevant. We then train a deep learning classifier to detect pseudoscientific videos with an accuracy of 76.1%. Next, we quantify user exposure to this content on various parts of the platform (i.e., a user's homepage, recommended videos while watching a specific video, or search results) and how this exposure changes based on the user's watch history. 
We find that YouTube's recommendation algorithm is more aggressive in suggesting pseudoscientific content when users are searching for specific topics, while these recommendations are less common on a user's homepage or when actively watching pseudoscientific videos. Finally, we shed light on how a user's watch history substantially affects the type of recommended videos. %K Computer Science, Computers and Society, cs.CY,cs.SI
Horta Ribeiro, M., Jhaver, S., Zannettou, S., Blackburn, J., De Cristofaro, E., Stringhini, G., & West, R. (2020). Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels. Retrieved from https://arxiv.org/abs/2010.10397
(arXiv: 2010.10397)
Abstract
When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.
Export
BibTeX
@online{HortaRibeiro_arXiv2010.10397, TITLE = {Does Platform Migration Compromise Content Moderation? {Evidence} from {r/The\_Donald} and {r/Incels}}, AUTHOR = {Horta Ribeiro, Manoel and Jhaver, Shagun and Zannettou, Savvas and Blackburn, Jeremy and De Cristofaro, Emiliano and Stringhini, Gianluca and West, Robert}, LANGUAGE = {eng}, URL = {https://arxiv.org/abs/2010.10397}, EPRINT = {2010.10397}, EPRINTTYPE = {arXiv}, YEAR = {2020}, MARGINALMARK = {$\bullet$}, ABSTRACT = {When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment.}, }
Endnote
%0 Report %A Horta Ribeiro, Manoel %A Jhaver, Shagun %A Zannettou, Savvas %A Blackburn, Jeremy %A De Cristofaro, Emiliano %A Stringhini, Gianluca %A West, Robert %+ External Organizations External Organizations Internet Architecture, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations %T Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels : %G eng %U http://hdl.handle.net/21.11116/0000-0007-89E0-1 %U https://arxiv.org/abs/2010.10397 %D 2020 %X When toxic online communities on mainstream platforms face moderation measures, such as bans, they may migrate to other platforms with laxer policies or set up their own dedicated website. Previous work suggests that, within mainstream platforms, community-level moderation is effective in mitigating the harm caused by the moderated communities. It is, however, unclear whether these results also hold when considering the broader Web ecosystem. Do toxic communities continue to grow in terms of user base and activity on their new platforms? Do their members become more toxic and ideologically radicalized? In this paper, we report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures. We analyze data from r/The_Donald and r/Incels, two communities that were banned from Reddit and subsequently migrated to their own standalone websites. Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform, reducing the number of posts, active users, and newcomers. In spite of that, users in one of the studied communities (r/The_Donald) showed increases in signals associated with toxicity and radicalization, which justifies concerns that the reduction in activity may come at the expense of a more toxic and radical community. 
Overall, our results paint a nuanced portrait of the consequences of community-level moderation and can inform their design and deployment. %K Computer Science, Computers and Society, cs.CY
Wang, Y., Tahmasbi, F., Blackburn, J., Bradlyn, B., De Cristofaro, E., Magerman, D., … Stringhini, G. (2020). Understanding the Use of Fauxtography on Social Media. Retrieved from https://arxiv.org/abs/2009.11792
(arXiv: 2009.11792)
Abstract
Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussion on online communities. First, we develop a computational pipeline geared to detect fauxtography, and identify over 61k instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation against disinformation needs to take images into account, and highlight a number of challenges in dealing with image-based disinformation.
Export
BibTeX
@online{Wang_arXiv2009.11792, TITLE = {Understanding the Use of Fauxtography on Social Media}, AUTHOR = {Wang, Yuping and Tahmasbi, Fatemeh and Blackburn, Jeremy and Bradlyn, Barry and De Cristofaro, Emiliano and Magerman, David and Zannettou, Savvas and Stringhini, Gianluca}, LANGUAGE = {eng}, URL = {https://arxiv.org/abs/2009.11792}, EPRINT = {2009.11792}, EPRINTTYPE = {arXiv}, YEAR = {2020}, MARGINALMARK = {$\bullet$}, ABSTRACT = {Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussion on online communities. First, we develop a computational pipeline geared to detect fauxtography, and identify over 61k instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation against disinformation need to take images into account, and highlight a number of challenges in dealing with image-based disinformation.}, }
Endnote
%0 Report %A Wang, Yuping %A Tahmasbi, Fatemeh %A Blackburn, Jeremy %A Bradlyn, Barry %A De Cristofaro, Emiliano %A Magerman, David %A Zannettou, Savvas %A Stringhini, Gianluca %+ Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society Internet Architecture, MPI for Informatics, Max Planck Society %T Understanding the Use of Fauxtography on Social Media : %G eng %U http://hdl.handle.net/21.11116/0000-0007-89D9-A %U https://arxiv.org/abs/2009.11792 %D 2020 %X Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussion on online communities. First, we develop a computational pipeline geared to detect fauxtography, and identify over 61k instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation against disinformation need to take images into account, and highlight a number of challenges in dealing with image-based disinformation. %K Computer Science, Computers and Society, cs.CY
2019
Zannettou, S., Caulfield, T., Bradlyn, B., De Cristofaro, E., Stringhini, G., & Blackburn, J. (2019). Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter. Retrieved from http://arxiv.org/abs/1901.05997
(arXiv: 1901.05997)
Abstract
State-sponsored organizations are increasingly linked to efforts aimed to exploit social media for information warfare and manipulating public opinion. Typically, their activities rely on a number of social network accounts they control, aka trolls, that post and interact with other users disguised as "regular" users. These accounts often use images and memes, along with textual content, in order to increase the engagement and the credibility of their posts. In this paper, we present the first study of images shared by state-sponsored accounts by analyzing a ground truth dataset of 1.8M images posted to Twitter by accounts controlled by the Russian Internet Research Agency. First, we analyze the content of the images as well as their posting activity. Then, using Hawkes Processes, we quantify their influence on popular Web communities like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, with respect to the dissemination of images. We find that the extensive image posting activity of Russian trolls coincides with real-world events (e.g., the Unite the Right rally in Charlottesville), and shed light on their targets as well as the content disseminated via images. Finally, we show that the trolls were more effective in disseminating politics-related imagery than other images.
Export
BibTeX
@online{Zannettou_arXIv1901.05997, TITLE = {Characterizing the Use of Images in State-Sponsored Information Warfare Operations by {R}ussian {Tr}olls on Twitter}, AUTHOR = {Zannettou, Savvas and Caulfield, Tristan and Bradlyn, Barry and De Cristofaro, Emiliano and Stringhini, Gianluca and Blackburn, Jeremy}, LANGUAGE = {eng}, URL = {http://arxiv.org/abs/1901.05997}, EPRINT = {1901.05997}, EPRINTTYPE = {arXiv}, YEAR = {2019}, MARGINALMARK = {$\bullet$}, ABSTRACT = {State-sponsored organizations are increasingly linked to efforts aimed to exploit social media for information warfare and manipulating public opinion. Typically, their activities rely on a number of social network accounts they control, aka trolls, that post and interact with other users disguised as "regular" users. These accounts often use images and memes, along with textual content, in order to increase the engagement and the credibility of their posts. In this paper, we present the first study of images shared by state-sponsored accounts by analyzing a ground truth dataset of 1.8M images posted to Twitter by accounts controlled by the Russian Internet Research Agency. First, we analyze the content of the images as well as their posting activity. Then, using Hawkes Processes, we quantify their influence on popular Web communities like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, with respect to the dissemination of images. We find that the extensive image posting activity of Russian trolls coincides with real-world events (e.g., the Unite the Right rally in Charlottesville), and shed light on their targets as well as the content disseminated via images. Finally, we show that the trolls were more effective in disseminating politics-related imagery than other images.}, }
Endnote
%0 Report %A Zannettou, Savvas %A Caulfield, Tristan %A Bradlyn, Barry %A De Cristofaro, Emiliano %A Stringhini, Gianluca %A Blackburn, Jeremy %+ Internet Architecture, MPI for Informatics, Max Planck Society External Organizations External Organizations External Organizations External Organizations External Organizations %T Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter : %G eng %U http://hdl.handle.net/21.11116/0000-0005-767F-9 %U http://arxiv.org/abs/1901.05997 %D 2019 %X State-sponsored organizations are increasingly linked to efforts aimed to exploit social media for information warfare and manipulating public opinion. Typically, their activities rely on a number of social network accounts they control, aka trolls, that post and interact with other users disguised as "regular" users. These accounts often use images and memes, along with textual content, in order to increase the engagement and the credibility of their posts. In this paper, we present the first study of images shared by state-sponsored accounts by analyzing a ground truth dataset of 1.8M images posted to Twitter by accounts controlled by the Russian Internet Research Agency. First, we analyze the content of the images as well as their posting activity. Then, using Hawkes Processes, we quantify their influence on popular Web communities like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab, with respect to the dissemination of images. We find that the extensive image posting activity of Russian trolls coincides with real-world events (e.g., the Unite the Right rally in Charlottesville), and shed light on their targets as well as the content disseminated via images. Finally, we show that the trolls were more effective in disseminating politics-related imagery than other images. %K cs.SI,Computer Science, Computers and Society, cs.CY
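The Hawkes-process analysis mentioned in the entry above models how an event (e.g., an image appearing on one platform) raises the short-term probability of related events afterwards. As a minimal, self-contained illustration of the underlying machinery, here is a univariate Hawkes conditional intensity with exponential decay; the parameter values are purely illustrative, not those fitted in the paper, and the paper's actual analysis uses multivariate processes across platforms.

```python
import math

def hawkes_intensity(t, events, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity lambda(t) = mu + sum over t_i < t of
    alpha * exp(-beta * (t - t_i)).

    mu:    baseline event rate
    alpha: excitation added by each past event
    beta:  exponential decay rate of that excitation
    """
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

# Three image posts at t = 1, 2, 2.5: right after the burst the intensity
# is well above the baseline, then it decays back toward mu.
events = [1.0, 2.0, 2.5]
print(round(hawkes_intensity(0.5, events), 3))   # before any event: baseline 0.1
print(round(hawkes_intensity(2.6, events), 3))   # just after the burst: elevated
print(round(hawkes_intensity(10.0, events), 3))  # long after the burst: near 0.1
```

The self-exciting structure is what lets the paper's fitted models attribute, for each image appearance, how much "influence" came from prior appearances on other platforms versus background activity.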

Previous publications

For an up-to-date full list of my publications, please check my Google Scholar page.

Research Interests

Overall, my research focuses on applying machine learning and data-driven quantitative analysis to understand emerging phenomena on the Web, such as the spread of false information and hateful rhetoric. My research has been published at top-tier conferences such as ICWSM and ACM IMC. My co-authors and I have received best/distinguished paper awards at ACM IMC 2018 and CyberSafety 2019.

Honours & Awards

Reviewing Activity & Workshop / Conference positions

Teaching

  • Summer Term 2020: Seminar on Data-driven Approaches on Understanding Disinformation, Saarland University (Co-Instructor with Dr. Yang Zhang from CISPA)

Recent Positions

December 2019 - now:
Postdoctoral Researcher at the Max Planck Institute for Informatics (Saarbrücken)


August 2018 - February 2019:
Visiting Scholar at University of Alabama at Birmingham


January 2018 - July 2018:
Research Intern at Telefonica Research (Barcelona)


July 2014 - December 2014:
Research Intern at NEC Labs Europe (Heidelberg)

Education

PhD in Computer Science, 2019:
Cyprus University of Technology, Cyprus
Advisor: Michael Sirivianos
Dissertation: Towards Understanding the Information Ecosystem Through the Lens of Multiple Web Communities


MSc in Computer Science, 2016:
Cyprus University of Technology, Cyprus


BSc in Computer Science, 2014:
Cyprus University of Technology, Cyprus