Propaganda has become a hot topic since the 2016 U.S. presidential election cycle. Since then, we’ve learned of the significant efforts that state actors have made to misinform, misguide, and manipulate the institutions that we trust. Looking at Twitter, our president’s vocation and pastime of choice, we can see numerous reports of Russia aiming to gain an advantage by manipulating Americans’ behavior. And there is a common thread between these incidents—that Russian actors used fake accounts to promote inauthentic agendas.
An unsettling thought is that many of us did not discern that these accounts were fake. Consider this: many of us are passionate about certain topics and can occasionally be vocal about them in debates with others. Now imagine you’re following such a debate, only to realize that one of the accounts was not tweeting from a coffee shop in St. Louis, but rather from an office building in St. Petersburg. All of a sudden, your trust in Twitter wavers, and it brings to light the tricky problem of identifying propaganda online.
Sure enough, Twitter is making an effort to maintain users’ trust and to take down offending accounts, but identifying such accounts is not a static process. If Twitter identifies content as propaganda, a bad actor can adjust that content so that it goes undetected. Of course, Twitter will respond in kind and work to identify the newly changed content as propaganda. This back-and-forth is not unlike a pattern that occurs in computer security, where both security systems and hackers become more and more sophisticated as they discover, learn, and attempt to outsmart one another. Likewise, a successful propaganda classifier must account for this dynamic and should be robust to being exploited. This isn’t a consideration in your typical classification problem of, say, classifying animals as cats or not cats. Like, cats aren’t trying to trick you into thinking they’re not cats (although this would make for a potentially adorable plot in a Blade Runner-esque sci-fi thriller).
One possible weakness in classifying propaganda directly is assuming that the labels being used are correct, e.g., that a text identified as propaganda is in fact propaganda. As far as I know, there aren’t any propaganda experts who can identify propaganda with 100% accuracy! So the task becomes managing errors in labeled data. I’ve had the wonderful pleasure of enlisting our lab’s greatest minds to label Tweets. Using a majority vote, we can derive labels, with the understanding that our lab’s implicit biases play a role in the resulting labels. I will not say what those biases are, but let’s just say any disparagement towards Oreos will be duly mislabeled as propaganda. From there, one can use robust methods based on the γ-divergence to reduce the bias induced by mislabeling.
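As a minimal sketch, the majority-vote step might look like the following. The annotator labels and tweets here are entirely hypothetical, and this is just the aggregation step, not our lab’s full labeling pipeline:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate one label per tweet from several annotators' labels.

    annotations: list of lists, where annotations[i] holds annotator i's
    labels (e.g., "propaganda" / "ordinary") for every tweet, in order.
    Returns one majority label per tweet.
    """
    labels = []
    for votes in zip(*annotations):  # one column of votes per tweet
        labels.append(Counter(votes).most_common(1)[0][0])
    return labels

# Three hypothetical annotators labeling the same four tweets
ann = [
    ["propaganda", "ordinary", "propaganda", "ordinary"],
    ["propaganda", "ordinary", "ordinary", "ordinary"],
    ["ordinary", "ordinary", "propaganda", "propaganda"],
]
print(majority_vote(ann))
# -> ['propaganda', 'ordinary', 'propaganda', 'ordinary']
```

Even with a perfect vote-counting step, the aggregated labels inherit whatever biases the annotators share, which is exactly why a downstream method robust to label noise is worth the trouble.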
Another important question is what covariate information to collect, i.e., what attributes of Tweets can help us distinguish propaganda from ordinary content. For instance, we can analyze the text of a Tweet, which can give us useful information like sentiment or topics. But extracting information from text is generally a difficult task to do across multiple languages. An alternative is to explore the behavior of a Tweet on Twitter. For instance, how do users engage with a Tweet? A natural way to formulate engagement is through network analysis. Indeed, there are observed differences between so-called misinforming users and ordinary users, such as the tendency for misinforming users to sit within the core of retweet networks. These differences provide us with an alternative means to identify propaganda. Another advantage of using network features is that propagandists rely on engaging users, and masking that engagement would require more than simply changing the content of their Tweets. Selecting a robust feature set is crucial in dealing with propagandists’ ever-changing methods.
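To make the “core of the retweet network” idea concrete, here is a small sketch that computes each user’s core number (the largest k such that the user belongs to the k-core) by iteratively peeling off the minimum-degree node. The toy graph is hypothetical, and in practice one would use a graph library rather than this hand-rolled version:

```python
def core_numbers(edges):
    """Compute each node's core number in an undirected graph by peeling:
    repeatedly remove the node of minimum remaining degree, tracking the
    largest degree seen at removal time.

    edges: iterable of (u, v) pairs, e.g., retweeter-retweeted links.
    Returns a dict mapping node -> core number.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    core, k = {}, 0
    while adj:
        node = min(adj, key=lambda n: len(adj[n]))  # min-degree node
        k = max(k, len(adj[node]))
        core[node] = k
        for nbr in adj.pop(node):  # remove node from the graph
            adj[nbr].discard(node)
    return core

# Hypothetical retweet graph: a tight triangle of users plus one pendant user
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(core_numbers(edges))
# Triangle members a, b, c get core number 2; the pendant user d gets 1.
```

The appeal of a feature like this is in the text above: a propagandist can reword a Tweet cheaply, but moving themselves out of the core of a retweet network means giving up the engagement they depend on.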
A myriad of other issues arise from attempting to identify propaganda on Twitter, like sampling bias from using APIs, missing data, and reliable early detection. These issues also bring with them opportunities to better understand the phenomenon of propaganda, and to protect both our institutions and our trust in each other.
Khuzaima is a PhD Candidate whose research interests include machine learning and mobile health. His current research focuses on optimal treatment regimes on partially observed spatial networks. We asked a fellow Laber Labs colleague to ask Khuzaima a probing question.
Draw a graph of Alex’s productivity as a function of the length of his hair. Justify your answer.
I know this is a pressing question that has stumped scientists for decades, long before Alex joined the lab. But Alex is a complex individual. Well beyond his productive acuity, there is an abundance of dimensions to Alex’s constantly evolving character (“hair-acter?”). And lying below his flowing locks of hair are the answers to all of our questions. Above is a small sample of our discoveries to date. Each panel displays a picture of Alex, and his—err—hair as a heatmap of the designated attribute (the top of his head represents the shortest length of hair). As for an explanation, the relationship between Alex’s hair and these qualities is nothing short of magic, and as Dr. Laber would tell you, a magician never explains his tricks.