
A Brief Guide To User-Generated Censorship

Jul. 22, 2013

This post is a brief overview of my master’s thesis.

In June 2011, as heat and hardship both beat down on Britain, progressive activists proposed a general strike to protest austerity measures. They created a website at J30Strike.org, posted information about the strike, and launched a publicity campaign through social media, focusing especially on sharing links through Facebook.

It’s easy to understand why. Facebook’s News Feed does more than just capture and redistribute eyeballs: like the front page of a major newspaper, it also articulates an agenda, assembling a summary digest of important events. “As more and more is shared,” wrote engineer Peter Deng after Facebook repositioned the News Feed in the user’s home page, “we want you to be able to find out everything that is going on in the world around you.” It’s a vision of social media as a kind of map, as an atlas informing users of worthwhile destinations and providing routes, in the form of links, through which they may be reached.

But, ten days before the strike, Facebook began notifying the activists that links to J30Strike.org could not be shared because they “[contained] blocked content that has previously been flagged as abusive or spammy” by other users. It erased all links to J30Strike. Then, with relentless, recursive efficiency, Facebook blocked links to sites which themselves linked to J30Strike, including blog posts informing other activists of the embargo. J30Strike suddenly vanished from the picture of the world projected by the News Feed. It wasn’t filtered by a government or corporation. Its servers weren’t disabled by hackers. J30Strike was still perfectly accessible but had become strangely unavailable. Like a rural village erased from a map of the English countryside, even if not from the countryside itself, the site was still there, but suddenly became much less likely to be found by casual travelers.

I knew some of the J30Strike activists. I wrote one of the blog posts which was blocked by Facebook. Watching J30Strike disappear from my map disturbed me. My News Feed had indeed appeared to be a comprehensive record of everything important going on in the world around me, but my sudden inability to link to J30Strike destabilized that perspective, revealing instead its highly contingent character. What I and others were allowed to see depended upon a complex and invisible confluence of forces largely beyond our control. I began to wonder: what else was being hidden from me? How? And by whom?

These are among the questions I explored in my recently completed master’s thesis on user-generated censorship.

What Is User-Generated Censorship?

In my thesis I defined user-generated censorship as:

  • Strategic interventions which suppress information by erasing, or by making appear uninteresting, certain sociotechnical pathways through which it can be found
  • Initiated neither at the behest of, nor on behalf of, a formal public or private authority, but instead by ‘amateurs’ empowered by the distributed mechanisms of social media
  • Which, if revealed, strike other users as an ‘unauthorized’ or ‘inappropriate’ use of these systems, and which may therefore be described as a form of censorship

Put plainly: user-generated censorship is the strategic manipulation of social media to suppress speech.

Some Case Studies in User-Generated Censorship

Facebook War on Palin

In 2010 the journalist Brian Ries organized a campaign to flag a comment by Sarah Palin as “racist/hate speech” and succeeded in having it removed from Facebook. Ries’s campaign was an example of what Jillian York, the EFF’s Director for International Freedom of Expression, has called community policing campaigns on and through social media.

LibertyBot

In 2012, members of an anti-Ron Paul subreddit discovered that anything they posted, anywhere on reddit, was being downvoted into obscurity within seconds. They later learned another reddit user had written a program, called LibertyBot, which allowed Ron Paul supporters to voluntarily enroll their accounts in a botnet that would follow his opponents around reddit and downvote them so deeply and quickly that their voices would be much harder to hear.


More recently, it was discovered that the owner of a popular image-sharing website, with estimated revenues of $1.6 million a month, had been propping up its popularity on reddit with bots which downvoted links to his competitors.

Google NegativeSEO

Since Google’s PageRank algorithm famously interprets inbound links as a kind of “vote” in favor of the page, one tactic used by unscrupulous marketers is to write bots which create thousands of links pointing to a client’s page in order to push it up the rankings. In 2012, Google began penalizing sites with large numbers of “spammy” links pointing to them in order to disincentivize the practice. Some of these marketers, however, simply flipped their business model, launching so-called NegativeSEO services by which clients could point spammy links at their competitors in order to sink them in the rankings.
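
To make the “links as votes” intuition concrete, here is a minimal sketch of PageRank-style power iteration in Python. The graph, function name, and weights are my own illustrative assumptions, not Google’s implementation; the point is simply that, in the naive model, every inbound link raises the target’s score, which is exactly the signal the 2012 penalties turned into a liability.

    # Toy PageRank: rank flows along links, so each inbound link acts as a
    # weighted "vote" for its target. Illustrative sketch only.
    def pagerank(outlinks, damping=0.85, iterations=50):
        """outlinks maps each page to the list of pages it links to."""
        pages = list(outlinks)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in outlinks.items():
                if not targets:  # dangling page: spread its rank evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / len(pages)
                    continue
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # Hypothetical graph: a burst of "spammy" inbound links lifts a page under
    # the naive model; after 2012, the same burst could get it penalized instead.
    graph = {
        "client": [], "competitor": [], "blog": ["client"],
        "spam1": ["competitor"], "spam2": ["competitor"], "spam3": ["competitor"],
    }
    print(pagerank(graph))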

The Digg Patriots

Perhaps the most infamous case of user-generated censorship was that perpetrated by the Digg Patriots, a group of Digg users who coordinated to make the social news site more politically conservative than it would have been without their intervention. My thesis was grounded in a study of their messageboard archives which were leaked by their political enemies on Digg. I read almost 13,000 posts by the Patriots, in which they shared links to bury or mark as spam, discussed users they should target for suppression, and joked around with their friends. As one Patriot told me:


[The Digg Patriots were] comprised, or intended to be comprised, of Digg members with conservative political ideals as a means of countering the left-leaning material submitted to Digg that was making the front page of the site on a regular basis….these were submissions that any member of the Digg Patriots would have marked for burial upon encounter. But by organizing under a Yahoo group, the first member to encounter it could immediately let the others know it was there. This had the desired impact — that a mass of early burials would keep it off the front page when the same number of burials spread over time would not….The Digg Patriots could have been more successful [but] it did have the capability of keeping the field clear of “debris and detritus” so that other news stories could get to the front page.

Of course, all social media sites mediated by organizational algorithms make information more or less available. That’s what upvotes, downvotes, and “report as spam” buttons are supposed to do. But one of the most critical aspects of user-generated censorship for me is the mens rea, the guilty mind; the idea that this is a form of manipulation, or gaming, or otherwise strategic mustering of a system towards some goal. As another Patriot wrote defending their actions:


Again the question arises about the validity of us organizing through email…
I feel we are far outnumbered. So does that make what we do right?

To fight for what is right and just, I would say yes. Hopefully more people will see our beliefs as the right way. We’re called the right for a reason.

Social media are often conceptualized as neutral, natural sifters of collective intelligence: as systems which appear to operate, not which are operated through or operated on. Yochai Benkler, for example, has famously argued that the admittedly unequal allocation of attention through social media is the result of random distributions of interest and not of bottlenecks of control. Yet the Patriots not only wanted to make an impact (to change the distribution of attention through Digg): they actually preferred Digg, as opposed to other, more conservative communities, because it offered them the opportunity to be combative and force some sort of change.


if digg loses it’s competitive nature, and let’s face it the real satisfaction [is] in burying the fools and hearing them cry endlessly about it, where is the fun ? the whole “everyone wins because the only people i will relate with agree with me” thing is, how can i say it, too freakin’ libtard for me. i don’t use my twitter, facebook, or myspace accounts and landed on digg because i like fighting with my enemies and i really like winning. The dp’s have had an impact. where will the impact be if you’re swimming with the current ?

The Patriots developed innovative strategies to shift the political composition of Digg. For example, while comment sections have often been heralded as a venue for democratic discourse (however vulgar), the Patriots actually treated comments instrumentally. They deduced that the Digg algorithm treated comment activity as an indicator of interest, pushing more active posts higher and sinking less active posts lower, so they developed a strong norm of not commenting on liberal posts while creating purposefully outrageous comments on conservative posts to bait liberal users into a frenzied discussion.


Please, Please,stop the discussions.  You are playing right into their hands.If you just can’t help yourself, then Maybe you should find another outlet for your frustration. I spend far too much time on Digg to see it wasted by immature sniping.I hope no one is offended, but remember why we are here. We want to Depress the progressive stories, while encouraging conservative ones.
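
A toy scorer makes the logic of that norm visible. Nothing below reconstructs Digg’s actual formula; the fields, weights, and decay are hypothetical, chosen only to show how a ranking that treats comment activity as a proxy for interest rewards silence on one side and manufactured outrage on the other.

    from dataclasses import dataclass

    @dataclass
    class Submission:
        upvotes: int
        buries: int
        comments: int
        age_hours: float

    def toy_promotion_score(s, vote_weight=1.0, comment_weight=0.5, decay=1.5):
        """Hypothetical Digg-like scorer: net votes plus comment activity,
        discounted by age. Not Digg's real algorithm."""
        interest = vote_weight * (s.upvotes - s.buries) + comment_weight * s.comments
        return interest / ((s.age_hours + 2) ** decay)

    # Identical votes, different comment activity: the silent submission sinks,
    # the one baited into a noisy argument rises.
    quiet = Submission(upvotes=120, buries=40, comments=3, age_hours=4)
    noisy = Submission(upvotes=120, buries=40, comments=90, age_hours=4)
    print(toy_promotion_score(quiet), toy_promotion_score(noisy))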

One of the most interesting tactics developed by the Patriots was their use of “mutuals.” Mutuals were symmetrical relationships on Digg. The Patriots began to recruit, evaluate, and maintain mutuals not on the basis of any shared interests or values but by how rapidly, regularly, and reliably they would upvote conservative submissions. Several Patriots amassed small armies of hundreds of mutuals who functioned as trusted lieutenants to and through whom their influence could be extended.


Okay folks, want to hit FP often? Follow J’s lead. He’s got 90+ friends all who digg early (this is key). If you can cultivate 90-100 friends like this your subs will hit on a regular basis, but cultivating this many GOOD friends requires you to do the same for them. Gotta digg ’em early and never miss.

Why User-Generated Censorship Matters

User-generated censorship significantly complicates our understanding of the world picture stitched together by social media.

One popular way of understanding social media is as aggregators of the “wisdom of crowds,” an explanation most famously associated with the work of James Surowiecki. Surowiecki basically argues that if you aggregate information from a large number of independent people then you will end up with the “correct” information. Leaving aside for a moment the important question of epistemological validity (i.e., is knowledge uncovered, or is knowledge constructed), it’s easy to see how, from a Surowieckian perspective, a coordinated group like the Digg Patriots would trigger information cascades while the very logic of “wisdom of crowds” launders their many invisible hands, masking their machinations behind a democratic facade.
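
A tiny simulation shows how a coordinated bloc gets laundered into an ordinary-looking tally. The parameters and the “true quality” assumption below are mine, purely for illustration; no real site aggregates votes this simply.

    import random

    def crowd_verdict(n_independent, n_coordinated, true_quality=0.6, seed=1):
        """Toy aggregation: independent users upvote with probability equal to
        an item's 'true quality'; a coordinated bloc always downvotes. The
        published tally is indistinguishable from any other crowd verdict."""
        rng = random.Random(seed)
        ups = sum(rng.random() < true_quality for _ in range(n_independent))
        downs = (n_independent - ups) + n_coordinated
        return ups - downs

    print(crowd_verdict(1000, 0))    # positive: the "wisdom" of the crowd
    print(crowd_verdict(1000, 300))  # negative: the same crowd plus a quiet bloc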

Another popular way of understanding social media is as core components of the networked public sphere proposed by Yochai Benkler. For Benkler, the chief value of social media (broadly defined) is that they broadly distribute the tasks of filtering and accrediting quality content across many individuals, such that there are fewer bottlenecks and points of failure or control in comparison to the mass media. The case studies, however, suggest that there are influential algorithmic points of failure and control (rapid downvoting, marking as spam, and so on) which are now being exploited to shift the political composition of social media away from the random distribution of interests Benkler proposed.

User-generated censorship forces us to see social media, not as neutral aggregators of wisdom or interests, but instead as complex, contingent systems, systems under the direct control of no one but capable of being invisibly influenced by many actors, human and nonhuman. The front page of reddit, or the Facebook News Feed, are artifacts of a politics between friends, enemies, bots, algorithms, and a multitude of other actors.

When I try to explain user-generated censorship to people, I often find myself returning to the metaphor of the map.

It often feels as if we are adrift in the vast sea of the Internet; not a library, but an archipelago of Babel, dotted by infinite islands. Social media can be understood as a kind of map, a compendium of routes and ports of entry composed by the apparently earnest, independent, disinterested evaluations made by our anonymous, far-flung fellow travelers.

Of course these evaluations are actually quite messy: performative, political, reciprocal, etc. Yet their imperfections are only noticed when they are noticed. Twitter has frequently been accused of censorship when certain topics do not trend. But, as Tarleton Gillespie observes, the failure of a topic to trend is more properly understood as a disagreement between what these users think should be trending and what Twitter’s algorithms think should be trending. The specter of censorship arises out of the spooky gap between what the system is expected to produce and what it actually does. Like with Heidegger’s hammer, or Latour’s black box, social media simply seem to work – until they suddenly don’t, at which point they pop to the fore of consciousness to be inspected for human imperfections packed inside.

It is not so much that the artifacts of social media (the front page of reddit; the Facebook News Feed) have a politics as much as that they are artifacts of a politics: they are what is left behind to be found after all the political work of assembling them has been done. That doesn’t mean they aren’t useful or usable, but it does mean that we must understand (and study!) them as being made, not found; composed, not discovered.

So what now? In the conclusion of my thesis, I advocated two complementary approaches for studying user-generated censorship specifically and social media generally: ethnography and archaeology. Ethnography allows us to identify the actors, their ontologies and epistemologies, and trace the outline of what is happening. It is myopically focused on what the actors themselves can see. Archaeology, on the other hand, allows us to “zoom out” and compare the artifacts against each other so that insights might emerge from the gaps between them. For example, the work done here at Civic by my colleague Nate Matias to track gender in the news and Twitter followers allows users to notice differences in the composition of their world. The most productive path forward is to deploy these approaches in tandem, for the relevant distinction is not between qualitative and quantitative methods, but rather between tracing the assembly of an artifact and comparing artifacts once assembled. By these means we may come to see the maps of social media for what they are: incomplete and unnatural in any given configuration, yet indispensable in function for navigating the unfathomable largeness of the networked world.

This entry was originally posted to the blog of the Center for Civic Media.


Sorry, Nerds, But Obama Was Right About The Jedi Meld (And Metaphysics)

Mar. 1, 2013

Today, during a heated discussion on the sequester, a frustrated President Barack Obama made the following statement to the press corps as they challenged him to show more leadership in negotiations:

“I’m presenting a fair deal, the fact that they don’t take it means that I should somehow, you know, do a Jedi mind meld with these folks and convince them to do what’s right.”

Various commentators immediately criticized the President for, as they say, crossing the streams. Jim Kuhnhenn of the Associated Press wrote that Obama “…mixed his sci-fi metaphors…The Jedi reference comes from Star Wars, and the mind meld from Star Trek.” Xeni Jardin of BoingBoing wrote that the President “tried to drop a gratuitous nerd culture reference…and blew it.” #ObamaSciFiQuotes began trending on Twitter, mocking Obama for his apparent misstep.

Out of a nascent sense of patriotism, and animated by the spirit of my friend Matt Stempeck’s LazyTruth, I now reluctantly but firmly step forward in defense of my President against these reckless and ill-founded accusations. Obama did not, as Jardin claimed, “blow” his reference: he was more correct than any of his critics could possibly imagine.

First, as a friend pointed out, there is a Jedi Meld well established within the admittedly capacious but nonetheless official contours of the Star Wars: Expanded Universe. In Outbound Flight, a novel written by the prolific Timothy Zahn, the Jedi Master Jorus C’baoth instructs a young Anakin Skywalker that the Jedi Meld “permits a group of Jedi to connect their minds so closely as to act as a single person.” (emphasis added)

According to Wookieepedia, the Jedi Meld was deployed by dozens of Jedi, including (but not limited to) Obi-Wan Kenobi, Anakin Skywalker, Luke Skywalker, Mara Jade Skywalker, and Anakin, Jacen, and Jaina Solo, across dozens of officially licensed books. Indeed, its recovery and redevelopment, principally by the Solo children, was an important turning point in the Yuuzhan Vong War as chronicled exhaustively in the New Jedi Order series.

But not only is the Jedi Meld, through general acceptance and uncontroversial use, authoritatively established within the official Star Wars universe: it was the right reference for Obama to make.

Jedi Mind Tricks, according to Wookieepedia, “refer to a spectrum of Force powers which influenced the thoughts of sentient creatures”; the Vulcan Mind-meld, according to Wikipedia, “is a technique for sharing thoughts, experiences, memories, and knowledge with another individual.”

Both are powerful methods of influence, to be sure, but neither fully captures what Obama was suggesting when he said he could not “do a Jedi mind meld with these folks and convince them to do what’s right.” (emphasis added) Rather, the most appropriate method for Obama would be a Jedi Meld. For it is the Jedi Meld, rather than its more familiar cousins, that would allow Obama to be as effective as he suggested he would like to be; indeed, it is arguably the only technique that would allow him to be effective in the particular way he describes.

This argument is best understood through the framework of actor-network theory as developed by Bruno Latour. ANT is a huge box to unpack in a blog post, so for now let me simply say this: for Latour – and apparently Obama – the world is composed of actors. Progress towards a particular goal is made by convincing (“enrolling”) other actors to be “allies” which, once linked to and by you, bend their collective will towards your goal. As Clay Spinuzzi writes, “An actor-network is composed of many entities or actants that enter into an alliance to satisfy their diverse aims. Each actant enrolls the others, that is, finds ways to convince the others to support its own aims.”

Now, consider the following passages excerpted from Walter Jon Williams’ Ylesia:

Some have commented that these passages suggest that the Jedi Meld is used for communication, not convincing. But through the lens of Latour we see that the convincing comes before and during the communication. A Jedi Meld cannot take place before/until other Jedi have been convinced to enter into it, and thereafter it serves as a continuing site of contestation and cooptation. As I wrote in the comments below, it is C’baoth’s description of the Jedi Meld – “allows them to act as if they were a single person” – which implies, indeed necessitates influence: an assembled actor-network only holds together if all have been convinced to act as one. The linkages are made through not only the mind-meld but the other ontological actors which keep the linkages active from moment to moment.

When Obama says that he “can’t do a Jedi mind meld with these folks and convince them to do what’s right,” then, what we should understand him to be saying is that he cannot simply enroll these actively hostile allies at a distance and convince them to move towards his goal any more easily than a scientist can straightforwardly enroll gravity to make him fly. Like obstinately hot coals beneath the feet of a soothsayer, the Republicans are, for Obama, slippery black boxes which remain unopenable and unenrollable. The Jedi Meld method fails, and with it the network of possibility, not only for lack of midi-chlorians, but for a lack of available allies.

Far from being a mistake, mixed metaphor, or slip-of-the-tongue, Obama’s extemporaneous invocation of “Jedi Meld” was precisely on point, simultaneously displaying his nuanced and considerable command of the finer details of both actor-network theory and the Star Wars: Expanded Universe. Instead of mocking him from the comfort of our replica X-wing armchairs, as nerds and citizens we should be honored and awed by a commander-in-chief who offhandedly deploys such concepts in the public discourse.

Edit 3/2/2013, 10AM ET: At the request of some in the comments I have tried (perhaps successfully) to further articulate the Latour connection and its significance. My apologies if it was (and/or remains) obscure: I’ve been distracted writing my thesis. In any case, if you’re interested in learning more about actor-network theory, you should read Latour and his interlocutors. If you are looking for a good place to start, I would personally recommend beginning with (at least) the first two chapters of Graham Harman’s Prince of Networks before moving on to Latour’s Reassembling the Social. Careful, though: once you see ANT, you can’t unsee it.

This entry was originally posted on the blog of the MIT Center for Civic Media.


Opening the Black Box: Analytics and Admissions

Jan. 23, 2013

Today my guest post Opening the Black Box: Analytics and Admissions went live on the Chronicle of Higher Education’s Head Count blog. I’ve been working on this post with Chronicle editor Eric Hoover for a few months. It shares some of the surprising (and, for admissions officers, disturbing) challenges that web analytics pose for selective college admissions processes.

Here’s an excerpt:

One morning, shortly before we released admissions decisions for the Class of 2016, I received an e-mail from an applicant.


“No one from MIT checked my link included in the application,” it read. “I just checked my Google Analytics account. No visits from Boston [or] Cambridge. I am sure that I have been rejected. Feeling hopeless and helpless.”


Every year an increasing proportion of our increasing applications contain a link to some digital supplement: a project tumblr, a YouTube video, a Flickr album of artwork. The contents of those supplements often round out the student, adding dimensions that our very flat applications lack. But while we gladly accept the supplements because of the insight they add to the applicant, the analytics that often come embedded in the supplements also add insight into our process.


As admissions officers, we are accustomed to reading applications; now, applications are reading us.

You can read the rest here.


Beyond Accessibility: The Influence of E2E in Imagining Internet Censorship

Dec. 2, 2012

In 1984 three MIT researchers published a paper titled “End-to-End Arguments in System Design.” They advocated a “design principle that helps guide placement of functions among the modules of a distributed computer system.” Certain network solutions, they argued, could “completely and correctly be implemented only with the help of the application standing at the end points of the communication system.” They called this rationale the “end-to-end argument.”

Early networking pioneers like Paul Baran, who at the RAND Corporation was tasked with designing a communications system that could “survive” even if elements of it were unpredictably destroyed, had already developed the basic principles of building a decentralized network. Baran and his successors designed protocols, most prominently packet switching and best-effort routing, which could robustly navigate unreliable networks. Instead of sending each message once through a designated pipe, Internet Protocol breaks it into packets that may travel several different routes, trusting the machines “at the ends” to piece them back together. And instead of following a path preordained by a centrally switched telephone network, each myopic packet “hops” from server to server, each time asking if it is “closer” to its destination. Meanwhile, the servers act with the earnest goodwill of a small-town traffic cop, gently pointing each packet a bit further along its path. Internet Protocol generally distributes duties across many decentralized, rather than a few centralized, technological agents.


A traceroute follows a packet’s journey from MIT to Stanford
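
The hop-by-hop logic described above can be sketched in a few lines. The topology and the “distance” table below are invented stand-ins for real routing metrics; this is a cartoon of best-effort forwarding, not actual IP routing.

    # Toy hop-by-hop forwarding: each node knows only its neighbors and greedily
    # hands the packet to whichever neighbor looks "closer" to the destination.
    neighbors = {
        "mit": ["bos-core"], "bos-core": ["mit", "nyc"], "nyc": ["bos-core", "chi"],
        "chi": ["nyc", "sfo"], "sfo": ["chi", "stanford"], "stanford": ["sfo"],
    }
    # Hypothetical hop counts standing in for whatever metric a router consults.
    hops_to_stanford = {"mit": 5, "bos-core": 4, "nyc": 3, "chi": 2, "sfo": 1, "stanford": 0}

    def route(src, dst):
        path, node = [src], src
        while node != dst:
            node = min(neighbors[node], key=hops_to_stanford.get)  # greedy next hop
            path.append(node)
        return path

    print(route("mit", "stanford"))
    # ['mit', 'bos-core', 'nyc', 'chi', 'sfo', 'stanford']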

This broad suite of protocols and practices inspired end-to-end’s authors. Yet, despite its ambitious title, their paper was markedly modest in its prescriptions. The authors argued only that end-to-end was the most efficient means to execute error control functions within application file transfers. Their article did not address network latency, throughput, or other important considerations. Nor did the authors clearly define “ends” beyond applications in their argument, an important practical limitation since, as Jonathan Zittrain has argued, ends are indeterminate: what is an “end” and what is an “intermediary” on the Internet depends on one’s frame of reference.

As an argument, however, end-to-end crystallized dull practices into shiny principle. Tarleton Gillespie has traced the spread and influence of end-to-end as an idea across borders, disciplines, and industries. Despite – or perhaps because of – the difficulty in nailing it down to any precise technological arrangement, “e2e” became a model for understanding the Internet. It sanded the rough edges of implementation down to the smooth contours of an ideal: that “intelligence should be located at the edges” of a network. The terms “intelligence” and “edges” were rarely explained by those who invoked the argument. Instead, the rhetorical package replicated like a virus through the digital discourse, as advocates from varying backgrounds and with varying agendas deployed or resisted e2e in arguments over what the Internet ought to be. e2e wrapped up the Internet’s sprawling inconsistencies into an extremely portable model. As an interlocutor remarks in Latour’s Aramis: “Do you know what ‘metaphor’ means? Transportation. Moving. The word metaphoros, my friend, is written on all the moving vans in Greece.”

End-to-end, in other words, became a dominant and widespread configuration of the Internet, a robust technological and social construction complete with operating manuals explaining how it could or should work. The rallying cry of e2e – “keep intelligence at the edges!” – imagined the Internet as smart nodes connected by dumb pipes. The influence of this configuration guided many of the digital debates of the last decade. Network neutrality supporters, for example, fought to keep the pipes “neutral” – that is, “dumb” – so that the edges could remain “smart.” The e2e configuration implied and assumed a means of use: far-flung folks, as intelligent edges, conversing “directly” with each other through open pipes. But this configuration also suggested a means of subversion: if the Internet delivers information between smart nodes through dumb pipes, a potential censor can subvert it by silencing a node or blocking a pipe.

As a result, access to the pipe became a key way to conceptualize censorship. What passed for Internet censorship in the 1990s and 2000s was usually associated with blocks and filters imposed upon individuals “at the edges.” Electronic Frontier Foundation cofounder John Perry Barlow publicly worried about “origins and ends [of packets getting] monitored and placed under legal constraint.” The Berkman Center’s canonical books on web censorship – “Access Denied,” “Access Controlled,” and “Access Contested” – invoked this central metaphor directly in their titles.

Even those methods of suppression which arose organically from users followed this model of making ends inaccessible. Distributed Denial of Service (DDoS) attacks, for example, operate by launching a dizzying number of requests at a server, enough to disable it. By demanding access over and over, they ironically prevent it. DDoS attacks are generated “bottom-up” by individuals, not imposed “top-down” by institutions, but they pursue the same effect, a kind of electronic heckler’s veto rendering a speaker (or at least, her server) inaccessible to her audience. Meanwhile, those battling censorship organized around e2e by creating alternate sites or paths to blocked edges. Projects like Tor tunnel under the walls erected by censors, while sites like Pastebin offer redundant locations where threatened materials can be found should the originals be removed.

The e2e configuration was further reinforced by earlier narratives of censorship and resistance. The ACLU campaigned vigorously against both removing books and blocking websites in libraries by appealing to principles of free access. Emphasizing the edges fit intelligibly within the American legal tradition of individual actors: the Supreme Court, in an early Internet case, favorably compared every networked individual to a pamphleteer, framing the pipes as the means of distribution. These social and legal traditions suggested stock heroes (the pamphleteer; the whistleblower; the learner) and stock villains (the autocratic state or corporate censor; the wild and deafening mob). They provided generative frameworks of compliance and resistance drawn from an analog world, which were then reinterpreted and layered back upon the digital.

The end-to-end argument, animated by liberal traditions, helped shape how censorship was understood, practiced, and resisted on the networked web. Most importantly, its configuration suggested accessibility as a central theme of use and consequently of subversion.

Yet perhaps we are moving beyond accessibility? I’ve described in other blog posts how some emergent methods of suppression seem to orient, not around whether an object is formally accessible, but around whether an object is effectively findable. These methods take advantage of the fact that websites are never really just on the Internet, but are instead reached through certain systems which mediate between the person and the thing. Tools which link people to things; the pathways through which people travel to find things. The cavernous space of the Internet ends up collapsing to these tiny, two-dimensional conduits through which information actually circulates. Not the physical pipes, but the sociotechnical paths, the Ariadne’s thread which, when followed, connects us and, when severed, disconnects us, such that we remain adjacent but oddly, invisibly unavailable.

This entry was originally posted to the blog of the Center for Civic Media.


Making Sense of the MOOC Hype

Nov. 26, 2012

A few weeks ago I was contacted by the managing editor at The EvoLLLution. He asked me to write two pieces for a special publication on the Internet and adult education. This article, an attempt to make sense of MOOCs, was originally posted to their site and crossposted with permission.

MOOCs are everywhere. They swarm and darken the sun. Those involved with education speak of them in tones hushed by dread (if they work for traditional institutions) or delight (if they wish to disrupt them). To hear them talk, you would think MOOCs a surge rising up the seawall of some college citadel, threatening to engulf and overwhelm it.

But this dark vision of Massive Open Online Courses is a night terror, and, like all dreams, it follows the fantasy by eliding the facts. So let’s get specific. What, if anything, is new and different about MOOCs? What are their promises and perils for adult education?

Much of the buzz about MOOCs celebrates their Massive and Online aspects. But online courses, available at massive scale, aren’t anything new. The University of Phoenix enrolls over 400,000 students – more than the entire Big Ten – primarily through its online program. In fact, to the extent that “adult” education has come to mean something distinct from a “traditional” education, it usually refers to massive, online enrollment due primarily to the life constraints of the people who need it.

But what of the first “O” in MOOC? Isn’t one of the defining differences between, say, edX and the University of Phoenix the fact that the first is open and the latter proprietary? Well, it depends on what your definition of the word “open” is. As InsideHigherEd recently reported, all of the major MOOCs currently have restrictive terms of service compared to other “open” educational resources such as Wikipedia. Ian Bogost calls this “openwashing”: the practice of invoking a totemic word imbued with strong juju to appease apparently angry Internet gods.

For that matter, it’s not completely clear what the “C” – for “Course” – means in MOOCs. Are we talking about simply watching educational videos and reading papers? If so, then you can get material just as good, and just as free, in lots of places online (or, for that matter, at your public library). Perhaps they might design something more “interactive” to engage students? That might genuinely be a significant step forward, provided it can overcome the 97% attrition rate some early Udacity attempts have seen. But engaging interactivity remains a potential condition, not a necessary or realized one.

The primary problem with the idea of Massive Open Online Courses, then, is that they aren’t meaningfully more “massive”, “open”, “online”, or “courses” than any of the other available adult education options.

So why all the hype over MOOCs? As Bogost has described: MOOCs are marketing. More specifically, starting or joining a MOOC consortium signals that a college “gets it”, “it” being an unarticulated but profoundly felt sense that the Internet will “disrupt” education. At the same time, the MOOC movement differentiates itself from operations like University of Phoenix primarily by its association with prestigious institutions like Stanford, Harvard, MIT, and UVA. In other words: fancy colleges join MOOCs because they are important; they are important because fancy colleges join them; they join them because they are important…and so on, and so forth, ascending from idea to reality via the bizarre bootstrap characteristic of self-fulfilling startups.

MOOCs are a hustle. But, with a few notable exceptions, they are a mostly harmless hustle. In fact, they might even be a good hustle, because they’re muscling in on the turf of one of the worst hustles of all: the for-profit colleges which presently provide the bulk of adult education.

MOOCs may market themselves under some false pretenses, but for-profit colleges are scams all the way down. This language may seem strong, but I believe it is accurate. Sure, some people really do get decent educations through them, but, then again, some people really do get rich on Ponzi schemes. For-profit colleges enroll 12% of the nation’s students but produce 50% of its student loan defaults, while drawing 75% of their money from federal dollars. In 2010, 57% of students in for-profit schools dropped out, while the CEO of one leading for-profit chain made $40 million. Meanwhile, the dysfunctional online curriculums are often no better than the worst of MOOCs, wrapping dull videos and readings in duller discussion forums.

Maybe MOOCs can’t compete with an interactive, interpersonal education offered by a quality brick-and-mortar institution. But they also don’t need to. They can, and should, compete with the existing online education alternatives available to adults. Because, especially for this market, the most significant word in MOOCs isn’t “massive”, “open”, “online”, or “course.” In fact, the most significant word isn’t even contained in the name.

That word is “free.” MOOCs can provide the liberty to learn as adults so often must. Without relocating. Without reorienting. Without unpaid, unpayable debt. If MOOCs can simply educate adults for zero cost as well as the expensive for-profit colleges upon which people presently rely, then their admittedly imperfect enterprise will still do real good in the world by chasing real evil from it.


The First Step is to Understand Your Audience

Nov. 23, 2012

A few weeks ago I was contacted by the managing editor at The EvoLLLution. He asked me to write two pieces for a special publication on the Internet and adult education. This article, on how to best use social media to speak to potential students, was originally posted to their site and crossposted with permission.

Quick: what comes to your mind when someone says “social media”?

If you’re like most people, you’ll probably first think of products and platforms. And it’s easy to see why. From Facebook to FourSquare, Twitter to tumblr, YouTube to Yelp, the apparently inexhaustible supply of web developers and venture capitalists produce a torrent of toys to use and amuse. Social media managers, in turn, are defined (and define themselves) by their ability to master multiple media, surfing the crest of the technological wave.

This mental model of what “social media” means is powerful, prevalent, and precisely backwards. It emphasizes the wrong word. The key to understanding social media isn’t understanding the media. It’s understanding the social.

Allow me to illustrate by intentionally invoking an unfashionable example: mySpace. Ask any “social media guru” what mySpace is (or was) for. They will probably say something like “it’s a place for people to hang out and share information with their friends.”

This is both correct and incomplete. You might as well ask what a living room is for. It’s a place for people to hang out and share information with their friends. But what people, and what information?

“Bands, goths, and porn stars, all talking about hookups and bling”, your imagined interlocutor might reply. Also correct. Also incomplete. In a 2007 talk, the researcher danah boyd described how, while she was conducting interviews for her dissertation, a group of midwestern youth told her that mySpace was “for” organizing Bible studies. These teens were using the exact same medium, with the same formal properties, as the bands, the goths, and the porn stars. But they were a fundamentally different community using (and understanding) it in fundamentally different ways.

Let’s shift to another example more immediately relevant to the question of adult education. Pinterest is a rising star in part because people believe it to have cracked one of the toughest markets in social media: women. A post on TIME.com declares “Men Are from Google+, Women Are from Pinterest.” TechCrunch claims that Pinterest’s demographics skew disproportionately female.

Suppose these claims are true. Why might that be?

The answer, according to the business and tech press, is found in the formal properties of the medium. Forbes speculates that it is because “women trust other women in their circles more than anyone else.” BusinessInsider makes a beeline for evolutionary psychology explanations. “Males get a hit of happiness-inducing dopamine to the brain upon the completion of a task whereas females get a continuous stream of dopamine throughout the task,” writes author Dylan Love. “In other terms, males are neurologically rewarded for hunting while females are neurologically rewarded for gathering. As a social pinboard site, Pinterest is the perfect platform for gatherers.”

This is pure bollocks. Who uses Pinterest has nothing to do with the formal properties of Pinterest itself and everything to do with the people who are using it. Tapiture is technologically indistinguishable from Pinterest yet is almost exclusively male. Why? Probably because of significant community overlap with TheChive.com, a male-gazing hub featuring funny pictures and pretty girls. And indeed, in the U.K. at least, even Pinterest itself is mostly male. So much for dopamine streams.

The point I am trying to make is that social media are constituted by the communities which preexist and animate them. The formal properties of the media – or even the media themselves – are, at best, second order concerns.

Here’s why this matters:

The misplaced emphasis on products and platforms isn’t just an epistemological error. It actually interferes with the ends to which social media are employed. It’s not that each new product or platform overpromises and underdelivers (though that happens, too). It’s that they seduce and overwhelm. For any need, no matter how specific, there is or soon will be a corresponding service. Each, on its own, seems a useful, even indispensable, solution to help meet or facilitate some important goal.

But, in the aggregate, the sheer volume of solutions develops a debilitating gravity. For Silicon Valley, frequent failure is cheap and productive. For the communications professional, however, trying to master all these media incurs cognitive costs with compounding interest.

Instead, the key to a successful social media strategy is focusing on the community. Identify your audience. Figure out where, and through what, they are already interacting. Find someone who can relate authentically with your audience and hire them. Then, let them just interact as members of that community customarily do in a given medium.

There is a reason that top startups like Kickstarter have positions like Director of Community Support. It’s because they know that no shiny bells or whistles can replace quality content and conversation. The bad news is that you still have to create quality content and conversation. The good news is that you don’t have to try to keep up with Silicon Valley. All you need to do is understand your audience.


The Conservatism of Google

Nov. 13, 2012

Google, in its mission, famously aspires to “organize the world’s information and make it universally accessible and useful.” But a mission is a mission, not a modus operandi. Examining what Google does, rather than what it aims to do, reveals the surprisingly conservative role it actually plays in the world.

In principle Google imagines itself a progressive, even revolutionary, organization, which through information technology brings about change, specifically change in line with liberal democratic freedom. The ideas of cyber-utopianism – that certain technologies are liberating in a particular kind of way, and should be deployed to achieve those ends – constitute the conceptual foundation upon which the public myth of Google rests.

In practice Google behaves much more conservatively. By this I do not mean that it has a particular reactionary political or social agenda. Instead, I mean that Google generally respects rather than repudiates traditions and institutions, takes the “is” as the “ought”, and by doing so perpetuates and legitimates the existing order in a given context despite (and often at the immediate expense of) its mission.

Often this conservatism manifests through the simple everyday practices of its engineers. Earlier this semester I attended a talk by a lead developer for Google Products who described the technical challenges of running a shopping aggregator online. Afterwards a student raised his hand. He was from Wyoming, he said, and he had long relied on Google Products to buy guns and ammunition. However, since May, Google had prevented him from doing so, despite the fact that he was legally licensed to own and operate them. How and why did Google make that decision?

The engineer explained, with a matter-of-fact air, that Google didn’t want to sell anyone anything which they might not legally be able to have in their locale. Their guidelines were not only law, but policy: specifically advertising policy. If AdWords wouldn’t advertise it in a given location, then Google wouldn’t sell it there.

Judging by this statement our engineer seems to think of law and policy much like he thinks of coding libraries: neutral tools, facts, and standards which he can import and reference to do work for him. After all, why should he reinvent geographically specific distribution limitations any more than he should reinvent the while loop? This rationale is perfectly reasonable, profoundly conservative, and conceals messy regulations behind clean code.

Sometimes Google articulates its support for order, as when it began blocking ThePirateBay from appearing as prominently in search. A Google spokesperson defended the move technocratically, saying that “this measure is one of several that we have implemented to curb copyright infringement online.” No longer does Google present Search as an impartial exercise of algorithmic objectivism (query, pilgrim, and the truth shall be revealed). Instead, having embraced an editorial intermediary role, Google submits to, reproduces, and further legitimates the dominant legal and cultural paradigms.

Some may see this as Google simply abiding by the law. That may be so. My point is it’s hard to reconcile such a method with the Google mission. Whereas Google’s mission is progressive and empowering (it will universally distribute the tools of information so the people may do what they will), Google’s practice is conservative and paternalistic (…unless you might do something unacceptable, in which case you’re out of luck).

In the gap between Google’s principles and practice we find again the answer to the question posed by Tim Wu and Jack Goldsmith in Who Controls the Internet? As Wu and Goldsmith argued in 2006, early cyber-utopians such as Johnson & Post and John Perry Barlow were admirably aspirational but ultimately incorrect in their belief that, as Johnson and Post put it, the Internet would be “[separated from] doctrine tied to territorial jurisdictions [such that] new rules will emerge.”

Instead the precise opposite has occurred. Whatever its transformative effects may be, the Internet has not broken down the walls of country, culture, and law. To the contrary, it has been subjected to their inexorable emergence into a new medium. To borrow terms from Evgeny Morozov, this “realist” or “agnostic”, rather than utopian, understanding is the one borne out by what Google actually does. One need look no further than Google’s decision to block “Innocence of Muslims” from YouTube in Libya and Egypt after the recent embassy attacks. The complex and potentially life-altering geopolitical considerations which led to that unusual move may have been reasonable, understandable, even “good”, but above all shrewdly realist, not idealist, in character.

I don’t mean this essay as an attack on Google. I think Google is generally run by smart, well-intentioned people operating under incredibly difficult real-world constraints. My purpose is rather to reveal how Google understands and interacts with those constraints and what the sociopolitical effects of those interactions might be. The answer, unfortunately, seems likely to disappoint anyone still hoping for a truly liberating information revolution to emanate magically from the wizards of Mountain View.

This entry was originally posted to the Center for Civic Media.


What Does “Many-to-Many” Mean? Some Thoughts On Social Network Structures

Nov. 10, 2012

In my last post I wrote about links and objects. Specifically, I argued that, while traditional censorship (both analog and digital) had focused on the object to be censored (a book, a painting, a website), some emergent tactics of online censorship instead function by erasing or making uninteresting paths which lead to those objects.

I wrote:


This view imagines objects sitting, not “out there” in the open world, but rather entangled at the intersection of the routes which lead to and away from them.

Since then I’ve been thinking a lot about this view which I am developing, because it changes some of the other ways in which I conceptualize the collection of social practices and technological protocols we call “The Internet.”

On the one hand, it’s painfully obvious to observe that websites have links to them. Duh. One could argue that the defining feature of the Internet is the fact that it is, well, a network. Not just a network, for that matter, but a network of networks, with its single most salient feature being that it is composed of – constituted by – the links which connect the objects to each other through people.

On the other hand, this observation complicates another common concept of the Internet, one which I myself subscribed to unthinkingly until quite recently.

A dominant theme in Internet discourse is that it affords “many to many” communication. This idea has been expressed by many scholars whose work has influenced me. Clay Shirky uses many to many – or “group conversation” – dozens of times in his excellent Here Comes Everybody. He describes the fundamental feature of social software as allowing community to “now shade into audience; it’s as if your phone could turn into a radio station at the turn of a knob.” Before his book, Clay contributed, along with luminaries like danah boyd and David Weinberger, to a blog called “Many-to-Many.” Yochai Benkler wrote that the promise of the Internet would allow “the unmediated conversation of the many with the many.”

There’s even a Wikipedia page for many-to-many, which calls it “the third of three major Internet computing paradigms.” The entry differentiates it from “one-to-one” (like FTP and email) and “one-to-many” (like websites) communications as such:


With developments such as file sharing, blogs, Wiki, and tagging, a new set of Internet applications enable:

1) people to both contribute and receive information.

2) information elements can be interlinked across different websites.

This kind of Internet application shows the beginning of the “many-to-many” paradigm.

For a long time, I’ve taken for granted that what we commonly call social software does in fact afford many-to-many communication. Now I’m not so sure.

Part of the reason I’m confused is because there are conflicting definitions. A good example of this is email. Is email one-to-one or many-to-many? The IETF categorizes it as one-to-one, explaining that “we define one-to-one communications as those in which a person is communicating with another person as if face-to-face: a dialog.” Meanwhile, Clay Shirky celebrates email as one of the early successes of many-to-many communications because of how it facilitates “group conversations.”

But the main source of my confusion – and maybe the conflict above – is this: I’m not sure what many-to-many is supposed to mean.

What does it mean for many people to communicate to many people? How is it distinct from, say, many simultaneous one-to-one connections? How would it work?

Consider Twitter, a platform universally celebrated as many-to-many communication, at least by those who celebrate such things. Many individuals can send tweets to one person, to many people, or to no one in particular (meaning everyone). But does that make it many-to-many?

Bob and I are both following Alice. When Alice tweets something out, we’ll both see it, and we can both respond to her, to each other, or to both. But what about that is many-to-many? If I read Alice’s tweet and then send a reply to her, it’s still one-to-one. And even if I reply to both Alice and Bob, I’ve sent one message, but it really is copied and sent twice, one-to-one, to two different people. And so on, and so forth, for whatever n people are involved in the exchange.
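
One way to see the claim that the “many” decomposes into overlaid one-to-one deliveries is to sketch the fan-out directly. The follower graph and inbox model below are hypothetical, not Twitter’s actual architecture.

    from collections import defaultdict

    # Hypothetical fan-out delivery: one authored message becomes a separate
    # copy per recipient edge; nothing "many-to-many" survives at this layer.
    followers = {"alice": {"me", "bob"}, "me": {"alice", "bob"}, "bob": set()}
    inboxes = defaultdict(list)

    def publish(author, text, mentions=()):
        recipients = followers.get(author, set()) | set(mentions)
        for user in recipients:  # one point-to-point copy per link
            inboxes[user].append((author, text))

    publish("alice", "coffee later?")
    publish("me", "count me in", mentions=("alice", "bob"))
    print(dict(inboxes))  # a "reply to everyone" is really n one-to-one copies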

So what is many-to-many supposed to mean? I think it has something to do with a sense of the public sphere. Benkler, Shirky, and many other Internet scholars have been heavily influenced by Habermas. The Internet has been described, by those who follow this tradition, as a sort of public square, or town hall, or coffee shop online, where groups of people can come together and engage in discussion.

Suppose that, instead of tweeting, Alice, Bob, Carol, David, and I are all seated at a cafe talking politics. This would appear to be, as Yochai might say, “the unmediated conversation of the many with the many.” But is it actually?

I don’t think so. Instead, if Alice is talking (she is apparently quite the chatterbox), she’s actually holding several, simultaneous one-to-one conversations. Bob, Carol, David and I are all listening, but we are all taking different things from it, based on our own understandings and interpretations, the references we get, our view of the world. This is true whether Alice is addressing an audience of ten or ten thousand (one-to-many dissolves as well here). And if we’re all talking to each other, the cacophony which arises is composed of several simultaneous one-to-one conversations.

Shifting back to the digital world, when I’m on Facebook, or Twitter, or tumblr, I can’t communicate “many-to-many” just because all of my friends are “there” too. Suppose Alice, now tired of talking, posts a photo to Facebook, and the rest of the crew comments, conversing with each other about it. I think most M2M folks would consider this a clear cut case, but it seems to me to be more accurately a series of simultaneous point-to-point connections overlaid in the same “space.” My comment on Alice’s photo is point-to-point to Alice and another point-to-point to Bob and another point-to-point to Carol.

My point here is not to belabor the hermeneutics of textual interpretation. On the Internet, the implications of everything being one-to-one are quite important and manifest in very real ways.

Suppose Facebook decided to invisibly intercede in my comment such that it appeared to Alice but not to Bob or Carol. This is, in fact, exactly what tumblr does when it “ghosts” troublesome users who have been repeatedly reported for spamming and harassment. Their posts remain viewable to their followers – but not to anyone else. Notice that these trolls aren’t banned from tumblr (that is, kicked out of the coffeehouse). Instead, some of the links which connect them to other users (in simultaneous, overlaid one-to-one ways) are invisibly severed. Some folks have dropped out of the many relative to some others of the many, but not necessarily relative to everyone. This doesn’t make any sense unless, instead of some big unmediated many, there is actually a constellation of individuals entangled in an enormous net – perhaps a world wide web – of connections.
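
Ghosting is easy to express in that same link-by-link picture: visibility is decided per connection, not per post. The rule below is an assumption for illustration, not tumblr’s actual implementation.

    def visible_to(viewer, author, follows, ghosted):
        """Hypothetical ghosting rule: a ghosted author's posts stay visible to
        existing followers but are hidden from everyone else."""
        if author not in ghosted:
            return True
        return author in follows.get(viewer, set())

    follows = {"fan": {"troll"}, "stranger": set()}
    ghosted = {"troll"}
    print(visible_to("fan", "troll", follows, ghosted))       # True: link kept
    print(visible_to("stranger", "troll", follows, ghosted))  # False: link severed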

What if many-to-many is an illusion? A construct supported by an analog ideal but demonstrably inoperative and inoperable (for both intellectual and technical reasons) online?

It seems to me that such a realization would properly shift our emphasis away from objects (like photos and websites) and even social software (like Facebook or Wikipedia) and onto links: that is, the connections which constitute the network, arranging avenues of passage between people and through things. If everything is one-to-one, then building (or maintaining, or preserving) the routes which connect the points seems to me like an important – and previously underappreciated – priority.

This entry was originally posted to the blog of the Center for Civic Media.


The Ark in the Archives: Toward a Theory of Link-Oriented Antiepistemology

Nov. 6, 2012

Imagine an expressive object. A book, a painting, or a website will do. This object, in literary terms, constitutes a text, with a meaning to be interpreted by readers.

Assume that a given audience would like to read this text (that is, the text is not already subject to some form of internalized, repressive foreclosure). However, you, for any number of reasons, wish to intercede and prevent them from reading it. How might you go about doing this?

One way to do it would be to act on the object itself: that is, to subject the object to some form of overt cultural regulation. If it is a painting, you might ask a museum to remove it from its walls. If it is a book, you might demand a library remove it from its shelves, or even organize a burning in the town common. If it is an embarrassing or dangerous government secret, you might classify it and keep it under lock and key, available only to those with the proper clearance.

Professor Peter Galison’s article Removing Knowledge discusses the processes and practices of classifying documents. Galison notes that, if one counts pages, the volume of classified material produced each year far outstrips that entered into the Library of Congress. “Our commonsense picture may well be far too sanguine, even inverted,” Galison writes. “The closed world is not a small strongbox in the corner of our collective house of codified and stored knowledge. It is we in the open world—we who study the world lodged in our libraries, from aardvarks to zymurgy, we who are living in a modest information booth facing outwards, our unseeing backs to a vast and classified empire we barely know.”

Establishing this empire, Galison notes, was no trivial task. There was a method to the muzzling. He traces the long and (perhaps ironically) well-documented history of how classification schemes developed: carefully, thoughtfully, with almost academic rigor. Indeed, the intellectual character of classifying information inspires Galison to compare it directly to the philosophy of information. He writes: “Epistemology asks how knowledge can be uncovered and secured. Antiepistemology asks how knowledge can be covered and obscured. Classification, the antiepistemology par excellence, is the art of nontransmission.”

Classification, as antiepistemology, orbits the objects it classifies. We might, at the risk of compounding jargon, even call it an object-oriented antiepistemology. Depending on its status, the thing to be classified – a covert intervention, a nuclear equation – then becomes regulated in ways prescribed by the rubric which classified it.

This is a powerful antiepistemology. But are there other antiepistemologies? Other modes of intervention which might make texts unavailable to readers?

Suppose, instead of removing or destroying the object, one attempted to erase the avenues through which the object is available. This view imagines the object sitting, not “out there” in the open world, but rather entangled at the intersection of the routes which lead to and away from it. As Latour said at a GSD lecture in 2009, “…we always tend to exaggerate the extent to which we access this global sphere…There is no access to the global for the simple reason that you always move from one place to the next through narrow corridors without ever being outside.”

How might this work in practice?



Indiana Jones: Raiders of the Lost Ark concludes with the Ark, enclosed in an anonymous box, being wheeled into an enormous archive. The scene implies the box will here be kept, secret among secrets, until “top men” begin their work.

Suppose you’re Indiana Jones. Having witnessed the awesome and terrible power of the Ark, you want to prevent even these “top men” from accessing it. What do you do?

You could focus on the object and try to remove the Ark from the archives. But stealing the Ark seems like a difficult task, even for someone as resourceful as Indy. His efforts might even backfire by alerting the government, which, its attention raised, would double down on securing and preserving the Ark. And even if he did manage to sneak it out, what would he do with so dangerous an object suddenly in his sole possession?

But Indy could also do something else. If he is worried about “top men” finding the Ark, rather than removing it he could render it unfindable. He might, for example, find the index of boxes and simply edit or erase the Ark’s entry. In an archive of sufficient size and complexity, it seems likely no “top man” would know where or how to find it. In fact, most people wouldn’t even know it had gone missing. The Ark could be hidden in plain sight, simultaneously accessible and unavailable. The routes of passage leading to and from it would not be closed or blocked. Instead, by virtue of the altered indices, they would simply be made to seem boring or unusable to potential travellers.
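A minimal sketch of the same move in code, with invented names and no pretense of archival realism: the warehouse of objects is untouched, and only the index that routes people to them is edited.

```python
# A toy sketch of link-oriented antiepistemology: the objects stay put,
# but the route to one of them is quietly erased. All names are invented.

warehouse = {
    "crate-9906753": "Ark of the Covenant",
    "crate-1138":    "Miscellaneous artifacts",
}

index = {
    "Ark of the Covenant":     "crate-9906753",
    "Miscellaneous artifacts": "crate-1138",
}

def find(name):
    """The only way 'top men' locate anything: look it up in the index."""
    crate = index.get(name)
    return warehouse[crate] if crate in warehouse else "no such item"

print(find("Ark of the Covenant"))    # "Ark of the Covenant" - still findable

# Object-oriented intervention: steal or destroy the crate (loud, risky).
# Link-oriented intervention: quietly drop the index entry instead.
del index["Ark of the Covenant"]

print(find("Ark of the Covenant"))    # "no such item" - yet the crate is still
print("crate-9906753" in warehouse)   # True             sitting on the shelf
```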

If classification is object-oriented antiepistemology, then we might call this link-oriented antiepistemology. Link-oriented antiepistemology functions, not by removing or conspicuously blocking access to objects, but by erasing or making uninteresting the avenues which lead to them.

Object-oriented antiepistemology is common on the Internet. Every time a country or company erects a filter or a block, or every time Anonymous fires the LOIC at an underequipped server, they practice censorship by attacking objects. I would argue, however, that instances of link-oriented antiepistemology are also emerging on the Internet.

In summer 2011, as the heat and the hardship both beat down on Britain, anti-austerity activists proposed a general strike. They began organizing and promoting the event, relying, as so many groups have, upon digital platforms to spread the word. They set up a website at J30Strike.org and shared links to it on Facebook, trusting that referral traffic would amplify their message across their networks.

Their trust was betrayed when, ten days before the strike, Facebook began blocking all links to J30Strike.org. Attempts were met with a message which said the post “contains blocked content that has previously been flagged as abusive or spammy.” (Emphasis mine). Then, with relentless, recursive efficiency, Facebook blocked links to sites which linked to J30Strike, including blog posts informing other activists of the original embargo. The site was suppressed, and then its suppression was suppressed further.

J30Strike.org underwent a Rumsfeldian transformation, becoming an unknown unknown within the world of Facebook. It’s tempting to say that J30Strike disappeared down the memory hole, but in fact almost precisely the opposite occurred. The original object – the website – remained intact. What vanished were important avenues through which it could be found.

The story of J30Strike could be an instance of link-oriented antiepistemology. Social media appear to connect users directly to their friends such that they may share stories. But in fact, these apparently direct connections have a highly contingent character. What one can see, and what others are allowed to see, depend upon a complex and invisible confluence of forces, many of which are beyond any individual’s control.

If classification regimes are instances of object-oriented antiepistemology, then we might think of J30Strike – or the Digg Patriots – as instances of link-oriented antiepistemology. The objects (J30Strike.org; a DailyKos post about climate change) remain “visitable”. But few visit, because certain tactics have made the indices or routes which lead to them disappear or become utterly uninteresting.

One of the most fascinating and dangerous characteristics of this form of suppression is its silence. This mode of suppression suppresses its own operation. When an object is (often loudly) removed, the knowledge that something is forbidden engenders intense, taboo-driven interest in revealing it. But the erasure of avenues operates invisibly. It rarely betrays its own existence. It requires no clearances, vaults, or bonfires. And it is all the more effective and insidious for it.

This entry was originally posted to the blog of the Center for Civic Media.


A Modest Proposal: Sandy, Tontines, And Disaster Markets

by on Oct.29, 2012, under general

It is a melancholy object to those in our society when they see the streets flooded, with townspeople refusing to evacuate ahead of a terrible hurricane and thus recklessly risking death.

I think it is agreed by all parties that such prodigious suffering is unnecessary and avoidable, and that whoever could find out a fair, cheap and easy method of saving these people would benefit all. I shall now therefore humbly propose my own thoughts, which I hope will not be liable to the least objection.

It is understood broadly that we live in an age of wise crowds; that we humbly recognize all of us are smarter than any of us. The rise of prediction markets, most especially the proposed Policy Analysis Market for political developments in the Middle East, suggests a course of action oriented around these precepts.

The inefficiencies of centralized planning taint the top-down evacuation orders emanating from government bureaucracies; it should come as no surprise that citizens quite rationally disobey them. Where government fails – that is to say, inevitably and always – we should replace it with markets. So long as we get the incentives right, as social entrepreneurs like to say, we can harness the engine of self-interest for both the individual and the collective good: capitalism with a human face.

So how might we make a market in disaster preparedness?

The core, I think we can all agree, would be a tontine. We could distribute credits equally among citizens, redeemable for cash should they survive a given weather event. Such a reward would incentivize wise behavior more than any notice from a centralized weather service.

Of course, the value of the credits should increase as the number of survivors decreases. It is no great feat merely to follow the herd. Instead, those who made better decisions on equivalent information should be rewarded for their efficient use of it. The (remaining) market will then quickly move to incorporate this information and improve everyone’s outcomes in the future.
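The arithmetic is simple enough to sketch as a toy (offered, of course, with the same sincerity as the rest of this proposal, and with all names invented): a fixed pool divided among survivors, so that each additional casualty improves every survivor’s return.

```python
# A toy sketch of the tontine arithmetic, purely illustrative: a fixed pool
# split among survivors, so the payout rises as the number of survivors falls.

def payout_per_survivor(pool: float, survivors: int) -> float:
    if survivors <= 0:
        return 0.0                 # no one left to collect
    return pool / survivors

pool = 1_000_000.0
for survivors in (1000, 500, 100):
    print(survivors, payout_per_survivor(pool, survivors))
# 1000 -> 1000.0, 500 -> 2000.0, 100 -> 10000.0
```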

Some might object that such a system incentivizes living in dangerous areas. But physical safety need not be an impediment to wealth. As a matter of social justice, distant observers should still be able to benefit from others who wade weeping through waist-deep filth as everything they’ve known and loved burns down behind them. Thankfully, advances in financial engineering have created sophisticated instruments through which such broadly distributed benefits may be achieved.

For example, it would be trivially simple to create derivative products of these tontine contracts. Even those far removed from the affected areas would be able to wager on the lives of the victims, who could be organized into different risk tranches depending on, say, their location relative to known floodplains. Investors could rationally incorporate all of this information into their plans and benefit from their wise decisions.

The wonder of markets is that they serve all of society. A Disaster Market would not only potentially enrich some of its victims and speculators, but would also provide practical price signals for those merchants hoping to enter the market. Disaster Markets, in aggregating all known information about risks, would predictively inform emergency response teams where their services were most highly valued. These teams, following the admirable example of Crassus in antiquity, would themselves be private organizations, arriving quickly on the scene and then competing among themselves for the business of the survivors, who would in turn bargain aggressively over the rescue of the ruins which were their lives.

Hurricane Sandy has revealed the inadequacies of our current social systems. As a social entrepreneur, I aim only to harness the power of markets to avoid such a perpetual scene of misfortunes. I profess, in the sincerity of my heart, that I have not the least personal interest in endeavoring to promote this necessary work, having no other motive than the public good of my country. For I live on high, dry ground, in a well-built house, and thus stand to gain little compared to those fortunate few with the opportunity to profit from their pain.

This entry was originally posted on the blog of the Center for Civic Media.
