A Brief Guide To User-Generated Censorship

Jul. 22, 2013

This post is a brief overview of my master’s thesis.

In June 2011, as heat and hardship both beat down on Britain, progressive activists proposed a general strike to protest austerity measures. They created a website at J30Strike.org, posted information about the strike, and launched a publicity campaign through social media, focusing especially on sharing links through Facebook.

It’s easy to understand why. Facebook’s News Feed does more than just capture and redistribute eyeballs: like the front page of a major newspaper, it also articulates an agenda, assembling a summary digest of important events. “As more and more is shared,” wrote engineer Peter Deng after Facebook repositioned the News Feed in the user’s home page, “we want you to be able to find out everything that is going on in the world around you.” It’s a vision of social media as a kind of map, as an atlas informing users of worthwhile destinations and providing routes, in the form of links, through which they may be reached.

But, ten days before the strike, Facebook began notifying the activists that links to J30Strike.org could not be shared because they “[contained] blocked content that has previously been flagged as abusive or spammy” by other users. It erased all links to J30Strike. Then, with relentless, recursive efficiency, Facebook blocked links to sites which themselves linked to J30Strike, including blog posts informing other activists of the embargo. J30Strike suddenly vanished from the picture of the world projected by the News Feed. It wasn’t filtered by a government or corporation. Its servers weren’t disabled by hackers. J30Strike was still perfectly accessible but had become strangely unavailable. Like a rural village erased from a map of the English countryside, even if not from the countryside itself, the site was still there, but suddenly became much less likely to be found by casual travelers.

I knew some of the J30Strike activists. I wrote one of the blog posts which was blocked by Facebook. Watching J30Strike disappear from my map disturbed me. My News Feed had indeed appeared to be a comprehensive record of everything important going on in the world around me, but my sudden inability to link to J30Strike destabilized this perspective, revealing instead its highly contingent character. What I and others were allowed to see depended upon a complex and invisible confluence of forces largely beyond our control. I began to wonder: what else was being hidden from me? How? And by whom?

These are among the questions I explored in my recently completed master’s thesis on user-generated censorship.

What Is User-Generated Censorship?

In my thesis I defined user-generated censorship as:

  • Strategic interventions which suppress information by erasing certain sociotechnical pathways through which it can be found, or by making those pathways appear uninteresting
  • Initiated neither at the behest nor on behalf of a formal public or private authority, but instead by ‘amateurs’ empowered by the distributed mechanisms of social media
  • Which, if revealed, strike other users as an ‘unauthorized’ or ‘inappropriate’ use of these systems, and so may be described as a form of censorship

Put plainly: user-generated censorship is the strategic manipulation of social media to suppress speech.

Some Case Studies in User-Generated Censorship

Facebook War on Palin

In 2010 the journalist Brian Ries organized a campaign to flag a comment by Sarah Palin as “racist/hate speech” and succeeded in having it removed from Facebook. Ries’s campaign was an example of what Jillian York, the EFF’s Director for International Freedom of Expression, has called “community policing” campaigns on and through social media.

LibertyBot

In 2012, members of an anti-Ron Paul subreddit discovered that anything they posted, anywhere on reddit, was being downvoted into obscurity within seconds. They later learned that another reddit user had written a program, called LibertyBot, which allowed Ron Paul supporters to voluntarily enroll their accounts in a botnet that would follow his opponents around reddit and downvote them so deeply and quickly that their voices would be much harder to hear.


More recently, it was discovered that the owner of a popular image-sharing website, with estimated revenues of $1.6 million a month, had been propping up its popularity on reddit with bots which downvoted links to his competitors.

Google NegativeSEO

Since Google’s PageRank algorithm famously interprets inbound links as a kind of “vote” in favor of the page, one tactic used by unscrupulous marketers is to write bots which create thousands of links pointing to their clients’ pages to push them up the rankings. In 2012, Google began penalizing sites with large numbers of “spammy” links pointing to them in order to disincentivize the practice. Some of these marketers, however, simply flipped their business model, launching so-called NegativeSEO services by which clients could point spammy links at their competitors in order to sink them in the rankings.
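
To make the “links as votes” intuition concrete, here is a minimal sketch of the classic PageRank calculation. This is emphatically not Google’s production system; the toy web and the size of the link farm are invented for illustration.

```python
# Classic PageRank by power iteration: each page's rank is split as
# "votes" among the pages it links to. All data here is invented.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Two rival shops: shop_a has more organic "votes" and outranks shop_b.
web = {"blog": ["shop_a", "shop_b"], "news": ["shop_a"],
       "shop_a": [], "shop_b": []}
ranks = pagerank(web)
assert ranks["shop_a"] > ranks["shop_b"]

# A link farm: a thousand throwaway pages all voting for shop_b.
for i in range(1000):
    web[f"spam_{i}"] = ["shop_b"]
ranks = pagerank(web)
assert ranks["shop_b"] > ranks["shop_a"]  # the farm flips the ordering
```

The 2012 penalty inverted this arithmetic, and NegativeSEO simply follows the inversion to its conclusion: if manufactured inbound links now count against a page, manufacture them for your competitors.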

The Digg Patriots

Perhaps the most infamous case of user-generated censorship was that perpetrated by the Digg Patriots, a group of Digg users who coordinated to make the social news site more politically conservative than it would have been without their intervention. My thesis was grounded in a study of their message board archives, which were leaked by their political enemies on Digg. I read almost 13,000 posts by the Patriots, in which they shared links to bury or mark as spam, discussed users they should target for suppression, and joked around with their friends. As one Patriot told me:


[The Digg Patriots were] comprised, or intended to be comprised, of Digg members with conservative political ideals as a means of countering the left-leaning material submitted to Digg that was making the front page of the site on a regular basis….these were submissions that any member of the Digg Patriots would have marked for burial upon encounter. But by organizing under a Yahoo group, the first member to encounter it could immediately let the others know it was there. This had the desired impact — that a mass of early burials would keep it off the front page when the same number of burials spread over time would not….The Digg Patriots could have been more successful [but] it did have the capability of keeping the field clear of “debris and detritus” so that other news stories could get to the front page.
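
The timing claim is worth unpacking. Digg never published its promotion algorithm, but a crude model captures the snowball dynamic the Patriot describes: visible stories attract further votes, so burials that land before a story gains momentum do far more damage than the same number of burials spread across the day. Every constant below is invented for illustration.

```python
# Toy model of front-page momentum. NOT Digg's real (and proprietary)
# algorithm; the constants are made up. The one assumption that matters:
# higher-scoring stories get more exposure, so popularity snowballs.

def simulate(bury_schedule, ticks=100):
    score = 15.0                           # early upvotes from the submitter's allies
    for t in range(ticks):
        exposure = max(score, 0.0) / 50.0  # visibility grows with score
        score += 0.1 + exposure            # trickle of organic votes, plus snowball
        score -= bury_schedule(t)          # coordinated burials land here
    return round(score)

mass_early = lambda t: 30 if t == 0 else 0  # the Patriots' tactic: bury at birth
spread_out = lambda t: 0.3                  # the same 30 burials, spread over all 100 ticks

print(simulate(mass_early))  # about -5: the story never recovers, never snowballs
print(simulate(spread_out))  # about 46: the story outruns the slow drain
```

The same arithmetic explains why LibertyBot’s downvotes arrived within seconds: a vote cast at a story’s birth is worth far more than one cast an hour later.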

Of course, all social media sites mediated by organizational algorithms make information more or less available. That’s what upvoting, downvoting, and reporting as spam are supposed to do. But one of the most critical aspects of user-generated censorship for me is the mens rea, the guilty mind: the idea that this is a form of manipulation, or gaming, or otherwise strategic mustering of a system towards some goal. As another Patriot wrote defending their actions:


Again the question arises about the validity of us organizing through email…
I feel we are far outnumbered. So does that make what we do right?

To fight for what is right and just, I would say yes. Hopefully more people will see our beliefs as the right way. We’re called the right for a reason.

Social media are often conceptualized as neutral, natural sifters of collective intelligence: as systems which appear to operate, not which are operated through or operated on. Yochai Benkler, for example, has famously argued that the admittedly unequal allocation of attention through social media is the result of random distributions of interest and not bottlenecks of control. Yet the Patriots not only wanted to make an impact (to change the distribution of attention through Digg): they actually preferred Digg over other, more conservative communities, because it offered them the opportunity to be combative and force some sort of change.


if digg loses it’s competitive nature, and let’s face it the real satisfaction [is] in burying the fools and hearing them cry endlessly about it, where is the fun ? the whole “everyone wins because the only people i will relate with agree with me” thing is, how can i say it, too freakin’ libtard for me. i don’t use my twitter, facebook, or myspace accounts and landed on digg because i like fighting with my enemies and i really like winning. The dp’s have had an impact. where will the impact be if you’re swimming with the current ?

The Patriots developed innovative strategies to shift the political composition of Digg. For example, while comment sections have often been heralded as a venue for democratic discourse (however vulgar), the Patriots actually treated comments instrumentally. They deduced that the Digg algorithm treated comment activity as an indicator of interest, pushing more active posts higher and sinking less active posts lower, so they developed a strong norm of not commenting on liberal posts while creating purposefully outrageous comments on conservative posts to bait liberal users into a frenzied discussion.


Please, Please, stop the discussions. You are playing right into their hands. If you just can’t help yourself, then Maybe you should find another outlet for your frustration. I spend far too much time on Digg to see it wasted by immature sniping. I hope no one is offended, but remember why we are here. We want to Depress the progressive stories, while encouraging conservative ones.
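
The logic shows up even in a back-of-the-envelope model. Suppose, as the Patriots deduced, that the ranking formula counts comment activity as engagement alongside votes; the weight below is invented, since Digg never published its formula.

```python
# Hypothetical comment-weighted ranking; the 0.5 weight is invented.
def rank_score(upvotes, buries, comments):
    return (upvotes - buries) + 0.5 * comments  # all engagement helps, pro or con

# A conservative story that baits opponents into a 40-comment flame war gets a boost:
print(rank_score(upvotes=50, buries=20, comments=40))  # 50.0
# A liberal story met with disciplined silence is starved of the same boost:
print(rank_score(upvotes=50, buries=20, comments=2))   # 31.0
```

Hence the two-sided norm: silence starves opposing stories of the engagement signal, while outrage bait feeds friendly ones.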

One of the most interesting tactics developed by the Patriots was their use of “mutuals.” Mutuals were symmetrical relationships on Digg. The Patriots began to recruit, evaluate, and maintain mutuals not on the basis of any shared interests or values but by how rapidly, regularly, and reliably they would upvote conservative submissions. Several Patriots amassed small armies of hundreds of mutuals who functioned as trusted lieutenants to and through whom their influence could be extended.


Okay folks, want to hit FP often? Follow J’s lead. He’s got 90+ friends all who digg early (this is key). If you can cultivate 90-100 friends like this your subs will hit on a regular basis, but cultivating this many GOOD friends requires you to do the same for them. Gotta digg ’em early and never miss.
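
Cultivating mutuals was, in effect, bookkeeping. Formalizing the evaluation this quote describes, grading friends on speed, regularity, and reliability, might look something like the sketch below, where every name, number, and weight is hypothetical.

```python
from statistics import mean

# Hypothetical scoring of a "mutual": grade friends purely on how fast
# and how dependably they digg your submissions. Invented for illustration.

def mutual_score(vote_delays_minutes, submissions_seen, submissions_dugg):
    reliability = submissions_dugg / submissions_seen  # do they always show up?
    speed = 1.0 / (1.0 + mean(vote_delays_minutes))    # do they show up fast?
    return reliability * speed

# A friend who diggs nearly everything within minutes is a keeper:
print(mutual_score([2, 5, 3], submissions_seen=20, submissions_dugg=19))
# A sympathetic but slow, sporadic friend gets culled from the roster:
print(mutual_score([240, 600], submissions_seen=20, submissions_dugg=6))
```

What matters is what is absent: nothing in the score cares what the friend believes, only how dependably they vote.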

Why User-Generated Censorship Matters

User-generated censorship significantly complicates our understanding of the world picture stitched together by social media.

One popular way of understanding social media is as aggregators of the “wisdom of crowds,” an explanation most famously associated with the work of James Surowiecki. Surowiecki basically argues that if you aggregate information from a large number of independent people, you will end up with the “correct” information. Leaving aside for a moment the important question of epistemological validity (i.e., is knowledge uncovered, or is it constructed?), it’s easy to see how, from a Surowieckian perspective, a coordinated group like the Digg Patriots would trigger information cascades, while the very logic of the “wisdom of crowds” launders their many invisible hands, masking their machinations behind a democratic facade.
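
The cascade point can be made concrete with a toy sequential-voting model, in the spirit of the classic herding literature the wisdom-of-crowds debate draws on; the probabilities and thresholds are invented. Each user has a private judgment but defers to the visible vote margin once it grows large enough, so a small bloc that votes first can tip a hundred honest users who follow.

```python
import random

# Toy information cascade with invented parameters. Each user privately
# judges the story good with p = 0.7, but follows the visible herd once
# the vote margin reaches 2.

def final_margin(seeded_downvotes, crowd=100, seed=1):
    rng = random.Random(seed)
    up, down = 0, seeded_downvotes        # the coordinated bloc votes first
    for _ in range(crowd):
        likes_it = rng.random() < 0.7     # most honest readers like the story
        if down - up >= 2:
            vote_up = False               # cascade: ignore private judgment
        elif up - down >= 2:
            vote_up = True
        else:
            vote_up = likes_it
        up += vote_up
        down += not vote_up
    return up - down

print(final_margin(seeded_downvotes=0))  # strongly positive: the crowd surfaces it
print(final_margin(seeded_downvotes=3))  # -103: three early votes flip a hundred people
```

Once the cascade locks in, every subsequent vote looks like independent confirmation, which is exactly how the bloc’s machinations end up hidden behind a democratic facade.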

Another popular way of understanding social media is as core components of the networked public sphere proposed by Yochai Benkler. For Benkler, the chief value of social media (broadly defined) is that they distribute the tasks of filtering and accrediting quality content across many individuals, such that there are fewer bottlenecks and points of failure or control than in the mass media. However, the case studies above suggest that there are influential algorithmic points of failure and control (rapid downvoting, marking as spam, and so on) which are now being exploited to shift the political composition of social media in ways far removed from the purely random distribution of interests Benkler proposed.

User-generated censorship forces us to see social media, not as neutral aggregators of wisdom or interests, but instead as complex, contingent systems, systems under the direct control of no one but capable of being invisibly influenced by many actors, human and nonhuman. The front page of reddit, or the Facebook News Feed, are artifacts of a politics between friends, enemies, bots, algorithms, and a multitude of other actors.

When I try to explain user-generated censorship to people, I often find myself returning to the metaphor of the map.

It often feels as if we are adrift in the vast sea of the Internet; not a library, but an archipelago of Babel, dotted by infinite islands. Social media can be understood as a kind of map, a compendium of routes and ports of entry composed from the apparently earnest, independent, disinterested evaluations made by our anonymous, far-flung fellow travelers.

Of course these evaluations are actually quite messy: performative, political, reciprocal, and so on. Yet their imperfections go unnoticed until something forces them into view. Twitter, for instance, has frequently been accused of censorship when certain topics do not trend. But, as Tarleton Gillespie observes, the failure of a topic to trend is more properly understood as a disagreement between what these users think should be trending and what Twitter’s algorithms think should be trending. The specter of censorship arises out of the spooky gap between what the system is expected to produce and what it actually does. As with Heidegger’s hammer, or Latour’s black box, social media simply seem to work – until they suddenly don’t, at which point they pop to the fore of consciousness to be inspected for the human imperfections packed inside.

It is not so much that the artifacts of social media (the front page of reddit; the Facebook News Feed) have a politics as much as that they are artifacts of a politics: they are what is left behind to be found after all the political work of assembling them has been done. That doesn’t mean they aren’t useful or usable, but it does mean that we must understand (and study!) them as being made, not found; composed, not discovered.

So what now? In the conclusion of my thesis, I advocated two complementary approaches for studying user-generated censorship specifically and social media generally: ethnography and archaeology. Ethnography allows us to identify the actors, their ontologies and epistemologies, and trace the outline of what is happening; it is myopically focused on what the actors themselves can see. Archaeology, on the other hand, allows us to “zoom out” and compare the artifacts against each other so that insights might emerge from the gaps between them. For example, the work done here at Civic by my colleague Nate Matias to track gender in the news and in Twitter followers allows users to notice differences in the composition of their world. The most productive path forward is to deploy these approaches in complementary fashion, for the relevant distinction is not between qualitative and quantitative methods, but between tracing the assembly of an artifact and comparing artifacts once assembled. By these means we may come to see the maps of social media for what they are: incomplete and unnatural in any given configuration, yet indispensable for navigating the unfathomable vastness of the networked world.

This entry was originally posted to the blog of the Center for Civic Media
