Abstract
The problem of detecting bots, automated social media accounts controlled by software yet disguised as human users, has important implications. For example, bots have been used to sway political elections by distorting online discourse, to manipulate the stock market, and to push anti-vaccine conspiracy theories that may have fueled health epidemics. Most techniques proposed to date detect bots at the account level by processing large amounts of social media posts and leveraging information such as network structure, temporal dynamics, and sentiment. In this paper, we propose a deep neural network based on a contextual long short-term memory (LSTM) architecture that exploits both content and metadata to detect bots at the tweet level: contextual features are extracted from user metadata and fed as auxiliary input to an LSTM deep net processing the tweet text. A further contribution is a technique based on synthetic minority oversampling that generates a large labeled dataset, suitable for training deep nets, from a minimal amount of labeled data (roughly 3,000 examples of sophisticated Twitter bots). We demonstrate that, from just one single tweet, our architecture achieves high classification accuracy (AUC > 96%) in separating bots from humans. Applying the same architecture to account-level bot detection yields nearly perfect classification accuracy (AUC > 99%). Our system outperforms the previous state of the art while leveraging a small, interpretable set of features and requiring minimal training data.
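To make the architecture concrete, the following is a minimal sketch (not the authors' implementation) of a contextual LSTM in Keras: the tweet text is processed by an embedding-plus-LSTM branch, user metadata enters as an auxiliary input, and the two are merged before the final classifier. All dimensions (MAX_WORDS, MAX_LEN, N_META, layer widths) are illustrative assumptions, not values from the paper.

    import tensorflow as tf
    from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, concatenate
    from tensorflow.keras.models import Model

    MAX_WORDS = 20000   # assumed vocabulary size
    MAX_LEN = 50        # assumed maximum tokens per tweet
    N_META = 10         # assumed number of user-metadata features

    # Text branch: token ids -> embeddings -> LSTM summary vector.
    text_in = Input(shape=(MAX_LEN,), name="tweet_tokens")
    x = Embedding(MAX_WORDS, 128)(text_in)
    x = LSTM(64)(x)

    # Auxiliary branch: contextual features from user metadata.
    meta_in = Input(shape=(N_META,), name="user_metadata")

    # Merge both branches and classify bot vs. human.
    merged = concatenate([x, meta_in])
    merged = Dense(32, activation="relu")(merged)
    out = Dense(1, activation="sigmoid", name="bot_probability")(merged)

    model = Model(inputs=[text_in, meta_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])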
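Similarly, the oversampling step can be approximated with an off-the-shelf SMOTE implementation (here, imbalanced-learn); the toy data below merely stands in for the small labeled set of roughly 3,000 bot examples.

    import numpy as np
    from imblearn.over_sampling import SMOTE

    # Toy imbalanced data standing in for the small labeled bot/human set.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = np.array([1] * 100 + [0] * 900)   # minority class: "bot"

    # SMOTE synthesizes new minority-class examples by interpolating
    # between existing minority samples and their nearest neighbors.
    X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
    print(X_res.shape, np.bincount(y_res))  # classes are now balanced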
Introduction
During the past decade, social media platforms like Twitter and Facebook have emerged as widespread tools for massive-scale, real-time communication. These platforms were promptly praised by some researchers for their power to democratize discussions [31], for example by allowing citizens of countries under oppressive regimes to openly discuss social and political issues. However, in light of many recent reports of social media manipulation, including political propaganda, extremism, and disinformation, concerns about their abuse are mounting [19]. One example of social media manipulation is the use of bots (a.k.a. social bots, or sybils): user accounts controlled by software algorithms rather than human users. Bots have been extensively used for disingenuous purposes, ranging from swaying political opinion to perpetrating scams. Existing social media bots vary in sophistication: some are very simple and merely retweet specific posts based on simple rules, whereas others are sophisticated enough to interact with human users.