
Thinking About User Generated Censorship

Jun. 14, 2012, under general

This fall I will be taking a leave from my job to be a full-time graduate student in CMS at MIT. More on that later. For now, this post lays out the contours of my proposed master’s thesis, both to help me organize my own thoughts and also in the hopes others will help me think about them.

In 2009 a loosely-knit group of conservative Diggers founded the Digg Patriots, a highly active “bury brigade.” Hosted in a Yahoo! Group and facilitated by a variety of post-tracking technologies, the Digg Patriots would link each other to posts or posters they deemed unacceptably “liberal” so that they could team up to “bury” them, downvoting them into obscurity. According to Phoenixtx, a founder of the Digg Patriots, “The more liberal stories that were buried the better chance conservative stories have to get to the front page. I’ll continue to bury their submissions until they change their ways and become conservatives.”

In 2008, a conservative blogger accused “the left” of a similar strategy: flagging conservative YouTube videos as spam or abusive in order to get them taken down. And, almost a year ago today, links to a U.K. strike site were blocked as spam on Facebook under strange and unexplained circumstances.

These incidents differ in important respects but they are characterized by a common dynamic: end-users repurposing certain algorithms to remove content from the stream of conversation.

It is my argument that today’s dominant information ecosystem, which has widely distributed the means of information production, has also widely distributed the means of information removal: as Internet intermediaries have designed and deployed tools to incorporate “social” feedback into quality-assurance algorithms, users have begun to strategically repurpose those tools in order to silence speech with which they disagree. The goal of my research is to document and define user generated censorship as an emergent practice, in relation to the mediating technologies which enable it.

Why “user generated censorship”?

For one, it nicely mirrors and invokes user generated content. Beyond the rhetorical flourish, the invocation has an intellectual purpose: the technological affordances and social practices associated with user generated content are the same affordances and practices which allow for its opposite. Put more plainly: the design of reddit lends itself to the earnest upvote but also the strategic downvote. The sorts of end-user power and input which characterize social production / Web 2.0 / whatever empower users not only to produce content but also to remove it.
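To make the mechanics concrete before moving on: here is a deliberately toy sketch, in Python, of how a feed tuned for earnest, independent votes can be gamed by a small coordinated group. The visibility threshold and scoring rule are made up for illustration; this is not reddit’s actual ranking algorithm.

    # Toy model of a vote-ranked feed. The threshold and scoring rule are
    # invented for illustration; this is not any real site's algorithm.
    VISIBILITY_THRESHOLD = 0  # hypothetical: posts at or below this are hidden

    class Post:
        def __init__(self, title):
            self.title = title
            self.ups = 0
            self.downs = 0

        @property
        def score(self):
            return self.ups - self.downs

        def visible(self):
            return self.score > VISIBILITY_THRESHOLD

    post = Post("some unacceptably 'liberal' story")

    # Forty ordinary readers vote on the merits: 30 up, 10 down.
    post.ups, post.downs = 30, 10
    print(post.score, post.visible())   # 20 True  -- the earnest outcome

    # A 25-member bury brigade coordinates and downvotes in bulk.
    post.downs += 25
    print(post.score, post.visible())   # -5 False -- buried by a small minority

Nothing in the mechanism distinguishes the earnest downvote from the strategic one; the brigade’s power comes entirely from coordination.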

For another, the word “censorship” is controversial and contested, and I am going to try to use that historical weight to hammer home why this matters. Censorship – as opposed to repression – is something that we think of as being an exercise of centralized power. A pope censors. A king censors. Even a local autocrat draws their power ex officio.

But the reason we worry about censorship has less to do with the structure of power which enables it than with the results which obtain: the silencing of ideas, of culture, of alternative perspectives.

“Internet censorship” has been done to death in the academic (and popular) literature. But it is all the old dynamic in a new medium. One worries about Google in China – or just plain China or Google alone – because of the power that large centralized authorities can wield over their constituents (and each other).

The Digg Patriots, on the other hand, have no office and no formal power which exceeds that of any other individual user. But through their strategic behavior they were able to repurpose the power usually reserved by and for centralized authority towards their own ends.

This is interesting and new and different, I think. Facebook has a lot of centralized power over the links shared in its news feed. It would never, I think, explicitly put content up for vote: “should we allow people to link to J30Strike?” Nor would it, I believe, allow its engineers to block content with which they politically disagree. But by allowing end users to make a nominally neutral decision (“is this spam”) and then enforcing that decision with the full power of a centralized network, Facebook – and everyplace else – has effectively delegated the power associated with the center of a network to a subset of the nodes at the edges.
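A minimal sketch of that delegation dynamic, with an invented flag threshold, invented function names, and a placeholder URL; it describes no real platform’s implementation:

    SPAM_FLAG_THRESHOLD = 10   # hypothetical: flags needed before a link is blocked

    blocked_links = set()      # the centrally enforced blocklist
    flags = {}                 # link -> set of users who flagged it

    def report_spam(user_id, link):
        """An edge node casts a nominally neutral 'is this spam?' vote."""
        flags.setdefault(link, set()).add(user_id)
        if len(flags[link]) >= SPAM_FLAG_THRESHOLD:
            blocked_links.add(link)   # the center enforces the edge's decision

    def can_share(link):
        return link not in blocked_links

    # Ten coordinated users are enough to block the link for everyone:
    for i in range(10):
        report_spam(f"brigade_user_{i}", "http://strike-site.example.org")
    print(can_share("http://strike-site.example.org"))   # False

A handful of coordinated nodes at the edges produce a decision that the center then enforces against every user on the network.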

So there is my project, as a series of concentric circles. At its core, it is a journalistic enterprise, documenting what I believe to be an emergent dynamic between users and technology. But that dynamic operates within a larger context: not only why information matters, but how this practice constitutes an entirely new configuration of user power within networked social intermediaries.


Dispatches From The Front: 12 Hours On ChatRoulette

Feb. 12, 2010, under general

I can’t stand a lot of popular (and, sadly, sometimes scholarly) writing about cyberspace. So much of it is breathless hype, superficial snapshots, and baseless theoretical wankery.

For example, when Second Life was booming, a lot of people were writing a lot of things about its business and investment potential without ever having once walked around in it. That’s a critical difference, because you stop thinking about Second Life as the next international marketplace the first time you’re caged and accosted by an anthropomorphic fox, endowed in a diverse, imaginative, and physically impossible manner. The data tell a different story.

Now, people like Eszter Hargittai have been diving deep into the data for years. But the great thing is that now everyone is doing it.

Take, for example, ChatRoulette. ChatRoulette is a service whereby any two users with webcams can be randomly assigned to one another. You log in, you click go, boom, you’re chatting with another random user.
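Mechanically, there isn’t much more to it than a waiting queue and a random draw. A minimal sketch of that pairing idea, in Python, purely to illustrate the concept (not ChatRoulette’s actual implementation):

    import random

    waiting = []   # users who clicked "go" and are awaiting a partner

    def join(user_id):
        """Pair the newcomer with a random waiting stranger, if there is one."""
        if waiting:
            partner = waiting.pop(random.randrange(len(waiting)))
            return (user_id, partner)   # start a chat between these two
        waiting.append(user_id)         # otherwise, wait for the next arrival
        return None

    print(join("alice"))   # None -- alice waits
    print(join("bob"))     # ('bob', 'alice') -- randomly matched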

Now, just from that, I might imagine all sorts of things about ChatRoulette. I might characterize ChatRoulette as the next wave in deliberative discourse, allowing individuals from different backgrounds and cultures to talk face to face in a totally unscripted and unforced fashion. I might prophesy an even smaller global village, where people could simply reach out to one another, connect, say hello, and find out that hey, someone cares. I could create all manner of handwaving, hypothetical bullshit.

Luckily, we have data. Not drawn from any peer-reviewed journal. This is ChatRoulette, as documented by one intrepid, devoted, and bored reddit user, who spent 12 hours on the site and posted the results:

1276 cams viewed:

  • Conversations: 34
  • Avg. conversation duration: 23.7 sec
  • Longest conversation: 5 min 56 sec
  • Naked masturbating men: 298
  • Non-masturbating males: 678
  • Fake cams: 152
  • Females or mixed m/f: 148
  • Boobs shown, you ask? 0.0
  • Cum shots: 2
  • Man having sex with a raccoon: viewed 23 times
  • Not counted: repeats, no cam, empty rooms, people with dolls and signs.

Edit: I generally waited until the other person switched the cam, although for fake vids I switched the cam.

Edit: Logged on and saw my first legit real girl with exposed breasts.

Final Edit: Will log 12 more hrs. after my show to get a complete 24 hr sample.
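For scale, a quick back-of-the-envelope pass over those numbers, using only the counts quoted above (the four big categories happen to sum exactly to the 1276 total):

    # Shares of the 1276 cams viewed, from the tally quoted above.
    total = 1276
    counts = {
        "naked masturbating men": 298,
        "non-masturbating males": 678,
        "fake cams": 152,
        "females or mixed m/f": 148,
    }
    for label, n in counts.items():
        print(f"{label}: {n / total:.1%}")
    # naked masturbating men: 23.4%
    # non-masturbating males: 53.1%
    # fake cams: 11.9%
    # females or mixed m/f: 11.6%

In other words, roughly three out of four cams were men, and fewer than one in eight showed a woman at all.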

This, ladies and gents, is what good Internet research and analysis looks like. So thank you to the brave few on the front lines.


Bailenson

Feb. 9, 2010, under general

I attended a Berkman Center luncheon the other day where the keynote speaker was Jeremy Bailenson. Bailenson runs the Virtual Human Interaction Lab at Stanford. From their page:


The mission of the Virtual Human Interaction Lab is to understand the dynamics and implications of interactions among people in immersive virtual reality simulations (VR), and other forms of human digital representations in media, communication systems, and games. Researchers in the lab are most concerned with understanding the social interaction that occurs within the confines of VR, and the majority of our work is centered on using empirical, behavioral science methodologies to explore people as they interact in these digital worlds.

The talk (video, audio at link) was really great:

Unlike telephone conversations and videoconferences, avatars – representations of people in virtual environments – have the ability to control their physical appearance and behavioral actions in the eyes of their conversational partners, strategically enhancing or hiding features and nonverbal signals in real-time. Jeremy Bailenson – founding director of Stanford University’s Virtual Human Interaction Lab – explores the manners in which avatars change the nature of remote communication, and how these transformations can impact the ability to influence others in social and professional contexts.

A lot has been written about cyberspace law and policy, but not a lot of people (to my knowledge, at least) have done the heavy lifting of exploring how people actually behave in these environments. Even the HCI literature, or that to which I have been exposed, tends to focus on usability rather than framing effects and so forth.

I was very much impressed by the talk Bailenson gave, and by the work his lab is doing. While I’m not sold on the merits of all of it – I have a deep and ineradicable bias against anything that takes Second Life seriously – the point is that this is the sort of research that needs to be pursued if we are to understand how digital environments affect human communication and interaction.

Read their papers. Or, at least, check out the talk. It’s good stuff.
