Who distributes hateful language on social media? An analysis of the US context
Einat Minkov, Nir Lotan - Department of Information Systems
Alon Zoizner - Department of Communication
Sagi Pendzel - Department of Computer Science
Natural Language Processing
PhD Grant 2021
Growing evidence points to increasing levels of uncivil political discourse on social media, characterized by rudeness, intolerance, hostility, and xenophobia. According to a 2019 public opinion poll, 8 out of 10 Americans say that aggressive rhetoric on social media increases the chances of political violence. Existing studies have mainly focused on the features of social media platforms such as Facebook or Twitter that potentially increase the virality of uncivil and hateful content. However, less attention has been devoted to understanding the characteristics of the users who disseminate online incivility.
Our research project focuses on user-level characteristics, such as ideology, age, or education level, and explores which groups are more prone to uncivil behavior on Twitter. We proceed in two steps: first, we automatically identify political incivility in more than 2 million tweets from 200,000 users, using a supervised text classifier developed for this project; second, we predict personal user traits (e.g., income, education, ideology) from the accounts each user follows.
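A minimal sketch of this two-step approach, for illustration only: all tweets, account handles, and labels below are invented toy data, and the simple models stand in for the project's actual classifiers.

```python
# Illustrative sketch of the two-step pipeline (toy data, not the study's models).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Step 1: supervised classification of political incivility in tweet text.
train_tweets = [
    "these politicians are corrupt liars and traitors",   # toy uncivil example
    "anyone who votes for them is a brainless idiot",     # toy uncivil example
    "the new budget proposal deserves careful debate",    # toy civil example
    "i respectfully disagree with the senator's plan",    # toy civil example
]
train_labels = [1, 1, 0, 0]  # 1 = uncivil, 0 = civil

incivility_clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word unigrams and bigrams
    LogisticRegression(),
)
incivility_clf.fit(train_tweets, train_labels)

# Step 2: infer a user trait (here, ideology) from the accounts each user
# follows, encoded as a binary indicator vector over followed handles.
follows = [
    {"@outlet_a", "@group_a"},  # hypothetical account handles
    {"@outlet_a", "@group_b"},
    {"@outlet_b", "@group_c"},
    {"@outlet_b", "@group_d"},
]
ideology = [0, 0, 1, 1]  # toy labels: 0 = liberal, 1 = conservative

binarizer = MultiLabelBinarizer()
X_follows = binarizer.fit_transform(follows)   # users x followed-accounts matrix
trait_clf = LogisticRegression().fit(X_follows, ideology)
```

In practice, step 1 would use a large annotated tweet corpus and a stronger text model, and step 2 a feature space spanning many thousands of followed accounts; the indicator-vector representation shown here is one common way to encode follow lists.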
Initial findings show that users with higher levels of income and education not only tweet more about political issues but also use more foul language and intolerant rhetoric in their political tweets. This finding raises concerns, since education is often viewed as a source of democratic norms such as mutual respect and political tolerance. We also find that Democratic users use more impolite language in their tweets than Republicans. By combining state-of-the-art language processing and machine learning methods at scale, these results make a novel contribution toward identifying which social groups are more prone to disseminate online political hate.