Spamming social media to mute political dissent: the new face of censorship

A fairly interesting paper on the use of Twitter spam as a tool to mute political dissent was presented at the USENIX Workshop on Free and Open Communications on the Internet (FOCI ’13) by John-Paul Verkamp and Minaxi Gupta (Indiana University). Here, you’ll find an audio presentation, as well as the slides and the PDF of the study.

Verkamp, J.-P. and M. Gupta, 2013. “Five Incidents, One Theme: Twitter Spam as a Weapon to Drown Voices of Protest.” Free and Open Communications on the Internet (FOCI ’13, Washington, DC, USA), USENIX.

The primary interest of this paper lies in the international comparison of five cases of politically motivated spam campaigns concurrent with activist mobilization on Twitter. The incidents span four countries (Syria, China, Russia, Mexico) from April 2011 to May 2012. Spam messages can be either politically oriented (expressing opinions or pointing to more or less related news stories) or opportunistic (mostly containing URLs to commercial pages). In both cases the outcome is the same: spam tweets flood politically relevant hashtags, disrupt political conversation and interfere with the flow of information.

The ratio of spam to non-spam messages varies, but spamming is always sustained, and in three incidents (China 2, Mexico, and Russia) activists’ messages are positively dwarfed and ultimately suppressed by spam.


The timing is actually interesting. Most of the messages are automated spam delivered via scripts, peaking every hour at given times. But, at least for Mexico and Russia, there is a clear tendency to mimic non-spam users and adapt to the everyday patterns exhibited by human activity.
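The hourly-peak signature described above can be illustrated with a minimal sketch (not the authors’ code; the data and threshold are invented for illustration): if spam scripts fire at fixed times each hour, tweet timestamps cluster on a few minute-of-hour values, whereas human activity spreads out more evenly.

```python
from collections import Counter
from datetime import datetime

def minute_of_hour_concentration(timestamps):
    """Fraction of tweets falling in the single busiest minute-of-hour bucket."""
    counts = Counter(ts.minute for ts in timestamps)
    return max(counts.values()) / len(timestamps)

# Hypothetical scripted spam: a burst at minute 0 and minute 30 of every hour.
scripted = [datetime(2012, 5, 1, h, m) for h in range(24) for m in (0, 30)]
# Hypothetical organic activity: tweets spread across the hour.
organic = [datetime(2012, 5, 1, h, m) for h in range(24) for m in range(0, 60, 7)]

print(minute_of_hour_concentration(scripted))  # 0.5 — half the tweets share one minute slot
print(minute_of_hour_concentration(organic))   # ~0.11 — no dominant slot
```

A real detector would of course need to handle the mimicry the paper observes in the Mexican and Russian incidents, where spam deliberately tracks human activity rhythms.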


Spam messages have significantly fewer retweets (except in the Syrian case). Moreover, spam accounts tend to have very few followers, which means that spammers have to rely on direct targeting of users (by mentioning them in tweets). Unfortunately, this strategy has been successfully used by spammers to muddle activists’ campaigns in the past. And the mere presence of a mention cannot qualify a message as spam, so mentions do not help activists identify and filter spam.

So other criteria have to be used. The authors suggest spammer account registration (which tends to occur in blocks) and usernames (which tend to show similarities). As for block registration, the authors did not have access to IP data, so they were unable to confirm the results of previous studies showing that spam accounts tend to be registered from machines all over the world, while non-spam accounts are registered locally. Twitter spammer accounts appear to be generated automatically. The algorithm used to create their names can be reverse-engineered: almost 85% of them are exactly 15 characters in length (the maximum allowed by Twitter) and display recognizable patterns (like {name}+{family name}+{random numbers}).
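A heuristic along these lines is easy to sketch. The following is an illustrative toy check, not the authors’ reverse-engineered algorithm: it flags usernames that hit Twitter’s 15-character maximum and follow a letters-then-digits shape; the example usernames are invented.

```python
import re

# Toy pattern for the {name}{family name}{random numbers} shape the paper
# describes: a run of letters followed by a run of digits.
AUTOGEN_SHAPE = re.compile(r"^[A-Za-z]+\d+$")

def looks_autogenerated(username: str) -> bool:
    """Flag usernames that are exactly 15 characters (Twitter's maximum)
    and match the simple letters+digits spam-account shape."""
    return len(username) == 15 and bool(AUTOGEN_SHAPE.match(username))

print(looks_autogenerated("MariaSanchez789"))  # True: 15 chars, letters + digits
print(looks_autogenerated("activist_mx"))      # False
```

Such a check would misfire on legitimate 15-character names, so in practice it could only be one weak signal among others, alongside block-registration timing.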

In sum, spammer usernames and in-block account registrations appear to be the only paths the authors suggest following if we want to find some way of stopping the censorship-motivated flooding of political conversations online. Every other feature differs dramatically across incidents, and designing common strategies based on them to limit spam tweets and accounts doesn’t seem promising, especially because spammers tend to closely mimic human activity.

In this case, fighting spam is not a matter of ‘clean communication’ but a way of allowing free expression of political dissent online. It matters because disagreement is central to democratic debate. As Finn Brunton states in his book Spam: A Shadow History of the Internet, spam is a remarkably consistent notion that over the years has encompassed a number of domains (technological, financial, medical, etc.). But one common trait of the various permutations of this socio-technological object is the fact of exploiting ‘existing aggregations of human attention’ and, in so doing, helping human aggregates to recognize themselves as communities of interest.