Getting Meta: Digital Literacy Is As Much About Context and History As Skills


Photo by Doug Belshaw.

7.13.11 | Digital literacy is generally defined as a set of immediate skills to acquire, whether it’s knowing how to construct a short online video or understanding what is and is not appropriate to post on Facebook.

A series of recent articles, however, suggests that we should expand what we think of as digital literacy to include a broader, meta-understanding of the context and history of the digital tools and media that quickly become essential, how-did-we-ever-live-without-it parts of our lives. We might consider, for example, how technology and information become popularized, how knowing the history of Web 1.0 might help us understand the limits of Web 2.0, or how, even in the age of social media, more credibility is given to traditional forms of information distribution.

At DMLcentral, Doug Belshaw, inspired by the buzz around Google+, provides a fascinating analysis of how the very small percentage of “innovators” and “early adopters” of a particular technology often have a disproportionate influence over the “customs” that will come to rule that particular technological space—and how that influence can distort the very nature of literacy and diversity and construct a new type of digital divide.

None of this would be problematic if there was one way to be literate, one way to ‘be’ in the world. But, of course, we know that literacy is a multi-faceted and inherently social (rather than an individual and psychological) pursuit. Diversity matters. If the tools with which we ‘read’ and ‘write’ in digital spaces have been mediated by those going before us, that can affect how those in the mainstream think and act. If an important part of our identity can be shaped by these tools, then the structure of social networks (and especially emergent ones such as Google+) should matter to us. Technology is never neutral. It is not only made by human beings for a purpose but co-created by innovators and early adopters who have their own biases. Part of being digitally literate means being able to reflect on that and either accept it (as they are part of your tribe) or co-construct an alternative way of being.

Virginia Heffernan at The New York Times begins a recent column by ruminating nostalgically about her experience with old-fashioned online message boards—specifically, in her case, when she found solace and community on a fertility message board as she and her husband were trying to conceive their first child.

Heffernan writes that we have lost something significant in our transition from the boards to more Facebook-like social media:

They were built for people, and without much regard to profit. How else do you get crowds of not especially lucrative demographics like flashlight buffs, feminists and jazz aficionados? By contrast, the Web 2.0 juggernauts like Facebook and YouTube are driven by metrics and supported by ads and data mining. They’re networks, and super-fast — but not communities, which are inefficient, emotive and comfortable.

But Web 2.0 won the battle of social media years ago:

Lori Leibovich, the founder of Kvetch, the message board of which the fertility board was a part, told me she thought message boards were becoming “almost quaint, which I find sort of sad.” She likened boards like Kvetch to “group therapy,” adding that “conversations ‘stay in the room’ and you’re invested in the individuals in the group. Social networks are about broadcasting. More about your persona than it is about you as a person.”

Writing at MediaShift, Devin Harner and Alexa Capeloto, both assistant professors of English at John Jay College of Criminal Justice/City University of New York, where they direct the journalism program, are also concerned about our relationships with older media—in this case, traditional journalism. Ironically, however, it is their students who are holding onto the tradition and reluctant to see Web 2.0 as a legitimate journalistic space:

When our recent crop of digital journalism students were asked to create their own journalistic blogs and market their content through social media, they were uncomfortable. Although they habitually post to Facebook, the thought of actually reporting on a topic and putting their work into the public domain as journalism, versus a personal narrative of candid pictures and random Friday night ephemera, was scary.

In fact, a few students said that they didn’t see blogs as journalism, because anyone could do them. They were in class to learn about reporting and writing—capital-J Journalism—and not to repeat what they already do on their own time.

The authors believe it’s their job to expand students’ conception of journalism:

In some regards, it’s refreshing that students already know what we think we’re supposed to teach them. There is a difference between what they post on Facebook and what they see on CNN. Not anyone can do journalism, or at least do it well. It does take time and training and some hard lessons to become responsible, thoughtful purveyors of information.

But no one ever gets to the point of responsible purveyor if they are too scared to test their capabilities as reporters, or too conservative as readers to trust beyond the mainstream media. If students can’t see that there’s journalism lurking in the everyday things they do with information, especially now that technology has made such things constant, instant and ubiquitous, then we truly do have reason to worry about the future of journalism.

Depending on the context, it seems, both embracing the future and hanging onto the past can be a radical act.
