I’m blogging from Minneapolis, Minnesota, at the National Alliance for Media Arts and Culture (NAMAC) conference Leading Creatively 2012, where I’m representing the National Coalition Against Censorship (NCAC).
Earlier today I presented on a panel entitled Digital Frontiers: Copyright, Censorship, the Commons, and Privacy. The panel description read:
Can freedom of the press and the right to know survive the rough-and-tumble politics of lobbyist-addled Washington? Is your mobile device secure from search and seizure over whatever content you load onto it? Will the documentary feature you’ve labored over be accessible to your target audience? The Digital Frontier is up for grabs — and your participation in the debate will make a difference.
The panel was moderated by Nettrice Gaskins, President of NAMAC. Also on the panel were Chris Mitchell, Director of the Telecommunications as a Commons Initiative at the Institute of Local Self-Reliance, and Hank Shocklee, sonic architect, President of Shocklee Entertainment, and cofounder/producer of Public Enemy.
Nettrice presented our panel with five questions. Each of us had the opportunity to speak to each question or pass. After 45 minutes, we broke out into small groups for in-depth discussions of each question.
The questions were:
What happens when technology democratizes the technique and the attitude and the method of creating? And what happens when anybody can be an artist?
Regarding remixing, Lawrence Lessig implies here that attempts to regulate copyright online will kill creativity (innovation). What is your response to Lessig’s argument (explain why)?
The Commons Question
What impact do you think commons-based peer production, driven by new and emerging technologies, will have on independent media organizations?
What measures do you think need to be taken to better guarantee anonymity?
What is your compelling argument to legislators and big media corporations who embrace censorship and are willing to sacrifice peoples’ civil liberties in their attacks on free knowledge and an open Internet?
I spent most of my time on this last question, and I thought I’d share some of what I had to say here.
One of the things that makes both studying and fighting censorship in the 21st century so interesting (and difficult) is the way it confounds traditional regulatory frameworks.
For most of the 1900s, if you asked a civil libertarian to describe the evils of censorship, she would have likely told you about governments trying to restrict or obstruct free speech and the flow of information. The National Coalition Against Censorship, for example, was founded in the wake of New York Times v. U.S., the “Pentagon Papers” case.
Battling government censorship is, in my opinion, a noble cause. It is also relatively straightforward.
The First Amendment provides a fairly robust framework for fighting government restrictions on speech. When viewed through the lens of the preclassical legal consciousness which dominated at the time of the Founding Fathers, the Amendment may be understood, as historian Elizabeth Mensch writes, as the product of “a Lockean model of the individual right holder confronting a potentially oppressive sovereign power.” The classical legal consciousness which followed only further developed the public/private dichotomy in law, conceiving of the two as separate “spheres” properly kept apart, leading, eventually and perhaps inevitably, to a fetishization of contract and the Lochner era.
Without going too much further into an unnecessary analysis of legal history, the point is that our legal, political, and conceptual models all allow us to grasp what is at stake when a government censors speech. We may (and often do) disagree over the meaningful margins of the argument – when / if / what / how a government may intervene in expression – but it is an argument which is intellectually intelligible to anyone steeped in America’s brand of liberal theory.
Things become much murkier, however, when private intermediaries get in the game. The First Amendment doesn’t govern what companies do, and the American tradition of obeisance to contract tells us that when we use an information service we must take the bad with the good or take our business elsewhere.
This matters because there is a rapid expansion of privately owned speech intermediaries. Facebook, YouTube, Google: if you use it to communicate to an audience, then it is probably privately owned and administered.
Let me give a recent example. Josh Begley is a graduate student at the Interactive Telecommunications Program (ITP) at NYU. Last month, Josh made an iPhone app called Drones+. Drones+ aggregated reports of drone strikes by the U.S. military and put them on a map, pushing notifications to its users whenever the American military bombed someone with an unmanned drone.
According to Wired, Apple has now rejected Josh’s app three times. First, because it was “not useful”; then, because of a misplaced logo; and finally (and most recently) because its content was “objectionable and crude.”
Imagine, for a moment, that this were a U.S. government agency trying to enjoin the distribution of a freely available computer application which did the same thing. People would go nuts. And they would go nuts because it would be seen as an unacceptable overreach of government authority in contravention of the First Amendment. But while Begley’s story certainly made news, he hasn’t been able to enjoin Apple, because Apple is under no obligation to publish his app.
This is the central struggle of anti-censorship activists in the digital age. Our speech is moving into private spaces, but the mental models which provided protection are unable to follow it there. The only thing activists can do to companies like Apple and Amazon is shame them by invoking and relying upon a generally distributed sense that people should be able to say what they want within poorly defined parameters. That’s a scary thought for people who believe in the free flow of information, but until we develop a framework which allows us to understand and respond to censorship within private intermediaries, it’s the only option we have.
This post was published originally on the Center for Civic Media blog here.
This fall I will be taking a leave from my job to be a full-time graduate student in CMS at MIT. More on that later. For now, this post lays out the contours of my proposed master’s thesis, both to help me organize my own thoughts and also in the hopes others will help me think about them.
In 2009, a loosely knit group of conservative Diggers founded the Digg Patriots, a highly active “bury brigade.” Hosted in a Yahoo! Group and facilitated by a variety of post-tracking technologies, the Digg Patriots would link each other to what they deemed unacceptably “liberal” posts or posters so that they could team up to “bury” them by downvoting them into obscurity. According to Phoenixtx, a founder of the Digg Patriots, “The more liberal stories that were buried the better chance conservative stories have to get to the front page. I’ll continue to bury their submissions until they change their ways and become conservatives.”
In 2008, a conservative blogger accused “the left” of similarly strategizing to flag conservative YouTube videos as spam or abusive for takedown. And, almost a year ago today, links to a U.K. strike site began being blocked as spammy on Facebook under strange and unexplained circumstances.
These incidents differ in important respects but they are characterized by a common dynamic: end-users repurposing certain algorithms to remove content from the stream of conversation.
It is my argument that today’s dominant information ecosystem, which has widely distributed the means of information production, has also widely distributed the means of information removal: as Internet intermediaries have designed and deployed tools to incorporate “social” feedback into quality-assurance algorithms, users have begun strategically repurposing those tools to silence speech with which they disagree. The goal of my research is to document and define user-generated censorship as an emergent practice in relation to the mediating technologies which enable it.
Why “user-generated censorship”?
For one, it nicely mirrors and invokes user-generated content. Beyond the rhetorical flourish, the invocation has an intellectual purpose: the technological affordances and social practices associated with user-generated content are the same affordances and practices which allow for its opposite. Put more plainly: the design of reddit lends itself to the earnest upvote but also to the strategic downvote. The sorts of end-user power and input which characterize social production / Web 2.0 / whatever you call it empower users not only to produce content but also to remove it.
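The mechanics of the strategic downvote are easy to sketch. Below is a toy ranking function — a deliberate simplification I wrote for illustration, not any site’s actual production formula — showing how a modest, coordinated bloc of downvotes can sink a post that organic voting had surfaced:

```python
import math

def hot_score(upvotes, downvotes, age_seconds):
    """Toy 'hot' ranking: net votes on a log scale, decaying with age.

    A hypothetical simplification for illustration only; real sites
    use more elaborate (and often secret) formulas.
    """
    net = upvotes - downvotes
    sign = 1 if net > 0 else -1 if net < 0 else 0
    order = math.log10(max(abs(net), 1))
    # Older posts decay: every ~12.5 hours costs one order of magnitude.
    return sign * order - age_seconds / 45000

# An earnestly popular hour-old post: 120 upvotes, 10 organic downvotes.
organic = hot_score(120, 10, age_seconds=3600)

# The same post after a 30-member "bury brigade" piles on downvotes.
buried = hot_score(120, 10 + 30, age_seconds=3600)

print(organic, buried)  # the brigaded score is strictly lower
```

Because the ranking is logarithmic in net votes, a small coordinated group has an outsized effect: thirty downvotes can undo what took over a hundred earnest upvotes to build, and near a net score of zero they can flip a post’s sign entirely.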
For another, the word “censorship” is controversial and contested, and I am going to try to use that historical weight to hammer home why this matters. Censorship – as opposed to repression – is something that we think of as being an exercise of centralized power. A pope censors. A king censors. Even a local autocrat draws their power ex officio.
But the reason we worry about censorship has nothing to do with the structure of power which enables it but rather the results which obtain: the silencing of ideas, of culture, of alternative perspectives.
“Internet censorship” has been done to death in the academic (and popular) literature. But it is all the old dynamic in a new medium. One worries about Google in China – or just plain China or Google alone – because of the power that large centralized authorities can wield over their constituents (and each other).
The Digg Patriots, on the other hand, have no office and no formal power which exceeds that of any other individual user. But through their strategic behavior they were able to repurpose the power usually reserved by and for centralized authority towards their own ends.
This is interesting and new and different, I think. Facebook has a lot of centralized power over the links shared in its news feed. It would never, I think, explicitly put content up for vote: “should we allow people to link to J30Strike?” Nor would it, I believe, allow its engineers to block content with which they politically disagree. But by allowing end users to make a nominally neutral decision (“is this spam”) and then enforcing that decision with the full power of a centralized network, Facebook – and everyplace else – has effectively delegated the power associated with the center of a network to a subset of the nodes at the edges.
So there is my project as a series of concentric circles. At its core, it is a journalistic enterprise, documenting what I believe to be an emergent dynamic between users and technology. But that dynamic operates within a larger context, not only of why information matters but of how this new dynamic represents an entirely new configuration of user power within networked social intermediaries.
This evening I attended ‘Adapting Journalism to the Web’, a communications forum sponsored by the MIT Center for Civic Media, featuring NYU journalism professor Jay Rosen and Center director Ethan Zuckerman in a wide-ranging discussion about where journalism has been and where it is going.
On Wednesday, I was happy to attend a conversation with Assistant Secretary of State P.J. Crowley at the MIT Center for Future Civic Media. On Saturday, I was saddened to learn that as a result of that conversation – specifically, after characterizing the inhumane conditions of Bradley Manning’s detention as “ridiculous, counterproductive, and stupid” – Mr Crowley resigned his office.
The C4 meeting was intended to be an informal conversation between top-flight academics and a leading government official about social media and state policy. Mr Crowley was candid and forthright in his remarks. He provided wonderful insights about his challenges at State. And, when he agreed at the end of the talk that his informal comments could be on record, he did so presumably so that those not fortunate enough to physically attend could still profit from his wisdom and experience.
Mr Crowley’s job was to represent the opinions of the Obama administration, and he did not do so accurately in this discussion. Some have argued that this discrepancy justifies his allegedly encouraged resignation.
However, Mr Crowley also stated clearly that these were his personal, not professional, beliefs. Candid, forthright discussions between policymakers and their constituents are a necessary condition for a functioning republic. All law may be politics, but not all policy need be positioning. And if a public official can’t speak honestly in a conversation with leading academics at MIT’s center for civic media, then there is no safe space left for honesty in governance.
Accordingly, I have added my name to an open letter issued by attendees in support of Mr Crowley. Will it accomplish anything? Probably not. But the least I can do to support someone as candid as Crowley is to be just as forthright on his behalf. Because, under the circumstances, the resignation of PJ Crowley is ridiculous, counterproductive, and stupid.