Image via ToddPresner.com
In 2009, Todd Presner, head of UCLA’s DH department, along with scholars Jeffrey Schnapp and Peter Lunenfeld, published the Digital Humanities Manifesto 2.0, an updated version of their earlier manifesto, redrafted to include feedback from commenters and practitioners. Here’s how that document articulates what DH has done, and what it has the potential to do:
The first wave of digital humanities work was quantitative, mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays. The second wave is qualitative, interpretive, experiential, emotive, generative [emphasis original] in character. It harnesses digital toolkits in the service of the Humanities’ core methodological strengths: attention to complexity, medium specificity, historical context, analytical depth, critique and interpretation. Such a crudely drawn dichotomy does not exclude the emotional, even sublime potentiality of the quantitative any more than it excludes embeddings of quantitative analysis within qualitative frameworks. Rather it imagines new couplings and scalings that are facilitated both by new models of research practice and by the availability of new tools and technologies. Interdisciplinarity/transdisciplinarity/ multidisciplinarity are empty words ( ) [sic] unless they imply changes in language, practice, method, and output [emphasis original].
Though arguably outdated as a manifesto, this language can, I think, help us make sense of what DH does, or can do, specifically as applied to the study of literature.
The manifesto describes the “first wave” of DH work as quantitative—“mobilizing the search and retrieval powers of the database, automating corpus linguistics, stacking hypercards into critical arrays.” In other words, maximizing the potential of metadata and algorithms, using computers to organize huge amounts of data and otherwise do things with numbers that are difficult, or impossible, for humans to do.
For an example of this kind of work in the field of literature, we might look at Lisa Rhody’s paper, published in the winter 2012 edition of the Journal of Digital Humanities, on “Topic Modeling and Figurative Language.” Rhody’s premise, if I have understood it correctly, is that, due to cultural and other biases, we have possibly failed to ask important questions about the ekphrastic tradition in poetry, questions that might uncover different ways poets “enter into, disrupt, or perpetuate the ongoing discourse associated with the tropes that typify ekphrasis.” In order to get at those questions, Rhody “topic modeled” 4,500 poems by applying Latent Dirichlet Allocation (LDA), a generative statistical model, looking for patterns of language. Her hope was that LDA would be able to recognize figurative language within the poetry, rather than just keywords, therefore allowing her to analyze an enormous corpus of ekphrastic poems—obviously, a much larger sample than she would be able to analyze on her own—and to ask discovery-oriented questions that had not, as yet, been interrogated. The majority of her paper is devoted to explaining her methodology, why she chose LDA, and how it worked.
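To give a sense of what “topic modeling” actually involves, here is a minimal sketch of LDA via collapsed Gibbs sampling—the kind of algorithm running under the hood of the tools Rhody used. The toy corpus, topic count, and priors are my own illustrative choices, not Rhody’s data or settings:

```python
import random
from collections import defaultdict

# Tiny stand-in corpus (hypothetical; Rhody's corpus was 4,500 poems).
docs = [
    "marble stone statue carved stone",
    "light shadow painting canvas light",
    "statue marble carved figure stone",
    "canvas paint brush shadow light",
]

K = 2                    # number of topics to infer
ALPHA, BETA = 0.1, 0.01  # Dirichlet priors (document-topic, topic-word)
ITERS = 200

random.seed(42)
tokenized = [d.split() for d in docs]
vocab = sorted({w for d in tokenized for w in d})

# Randomly assign a topic to every word occurrence.
assignments = [[random.randrange(K) for _ in d] for d in tokenized]

# Count matrices the sampler updates in place.
doc_topic = [[0] * K for _ in docs]
topic_word = [defaultdict(int) for _ in range(K)]
topic_total = [0] * K
for di, d in enumerate(tokenized):
    for wi, w in enumerate(d):
        z = assignments[di][wi]
        doc_topic[di][z] += 1
        topic_word[z][w] += 1
        topic_total[z] += 1

# Collapsed Gibbs sampling: resample each token's topic from its
# conditional distribution given every other assignment.
for _ in range(ITERS):
    for di, d in enumerate(tokenized):
        for wi, w in enumerate(d):
            z = assignments[di][wi]
            doc_topic[di][z] -= 1
            topic_word[z][w] -= 1
            topic_total[z] -= 1
            weights = [
                (doc_topic[di][k] + ALPHA)
                * (topic_word[k][w] + BETA)
                / (topic_total[k] + BETA * len(vocab))
                for k in range(K)
            ]
            z = random.choices(range(K), weights=weights)[0]
            assignments[di][wi] = z
            doc_topic[di][z] += 1
            topic_word[z][w] += 1
            topic_total[z] += 1

# The "patterns of language" LDA surfaces: top words per topic.
for k in range(K):
    top = sorted(topic_word[k], key=topic_word[k].get, reverse=True)[:3]
    print(f"topic {k}: {top}")
```

The point of the sketch is that LDA never reads the poems; it only counts co-occurrences, and the “topics” that emerge are clusters of words that tend to appear together—which is what made Rhody curious whether figurative language would surface as such a cluster.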
You don’t need to understand all of that, however, in order to get that Rhody is using a computer to analyze literary data. And as the manifesto points out, this seems to be the first and most straightforward application of DH to the study of literature. It’s not a new idea. Father Roberto Busa, a Jesuit priest and linguistics scholar, was probably the first to apply “machine-generated concordance” to the study of literature when he began indexing Saint Thomas Aquinas’ work way back in 1949. Since then, data-processing tools developed primarily for other fields have been applied to the study of literature with increasing sophistication. Rhody’s paper is just one example.
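A “machine-generated concordance” of the kind Busa pioneered is, at bottom, a simple indexing task: list every occurrence of a word alongside its surrounding context. Here is a minimal key-word-in-context sketch; the sample sentence is my own invention, not a line from the Index Thomisticus:

```python
# Minimal key-word-in-context (KWIC) concordance: every occurrence
# of a keyword, shown with a few words of context on each side.
def concordance(text, keyword, width=3):
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w.strip(".,;") == keyword:  # ignore trailing punctuation
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left:>30} [{w}] {right}")
    return lines

sample = ("The statue stood in the garden. The poet described the statue "
          "as if the statue could speak.")
for line in concordance(sample, "statue"):
    print(line)
```

What took Busa decades of punch-card labor is now a few lines of code—which is precisely the trajectory of “first wave” DH the manifesto describes.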
The manifesto’s more urgent call to action, however—the cry for qualitative, interpretive, experiential, emotive, generative work within the digital humanities—has been more difficult for me to identify at work within the field of literature. The manifesto writers seem to be looking for something grand, something disruptive. An entirely new way to think about the study of humanities, including literature—a new theory.
“Timeline of Literary Theory” graphic created by Karen Nelson, discovered via a Google search
But DH does not seem to function as a theory. As an experiment, I asked several DH scholars if they could add digital humanities to a timeline of literary theory that I found through a Google search and have been using as a study aid in my Lit Theory class (above). Matthew Kirschenbaum, Matthew Jockers, and Douglas Eyman generously responded to my email—and all of them said, in so many words, that what I was asking them to do was impossible.
Professor Eyman made a few suggestions for places digital scholarship might overlap with theory—the Post-Colonial DHers; computational linguistics as a DH method aligned with semiotics; blogs and social networks of readers as a DH branch within reader response theory—but ultimately said he ran into a problem attempting to add a self-contained DH bubble to my chart. The issue, as he said, is that “DH doesn’t have a theory (or even a set of theories),” adding that “DH is also kind of multidisciplinary…so even attempting to align a not-well-defined approach to a specific disciplinary view of particular objects of study presents some serious challenges.”
Professor Kirschenbaum agreed, explaining, “Because ‘digital humanities’ is not itself one thing or one event I don’t think it lends itself to inclusion in your timeline in a modular fashion. I see it instead as twisting and torquing through major theoretical movements in the manner of Marcia Bates’ discussion of meta-disciplines.”
“Timeline of Literary Theory” graphic with Matthew Jockers’ annotations
You can see what Professor Kirschenbaum meant by “twisting and torquing” in Professor Jockers’ annotation of my graphic. Though Jockers agreed with everyone else that DH isn’t a unified theory or school of criticism (he wrote that he considers it “a catchall term that describes so many varied practices that it is ultimately useless for the sort of timeline…outlin[ed] here”), he was game enough to play along and attempt to add at least his own work, which he calls “computational criticism,” to the chart. Here’s his explanation of his thinking on that:
“I place this ‘computational criticism’ in a branch of theory that has two parents in your flow chart. The important grand parent node is formalism, specifically certain Russian formalists, and the key parent nodes are New Criticism and Structuralism (especially as related to linguistics). From those parents, a new line is born that combines elements of New Criticism and Structuralism and then runs parallel (and sometimes intersects) with cultural studies, reader response and even, on occasion, some of the orange nodes on the far right of your chart.”
Looking at Jockers’ annotations and reading his analysis, an image emerges—digital humanities as a giant, tubular waterslide, “twisting and torquing” through the history of literary theory, applying new, digital methods to established ideas, spitting out a new kind of product with a giant splash.
And this is, maybe, the key. Maybe the way to think about DH is as a collection of methods that leave room for a little magic—qualitative, interpretive, experiential, emotive, generative work—in the application.
More on that in my next post.
This post is part of a four-part series (links below) that seeks to explain the role of Digital Humanities in literary study, written as an alternative to a traditional paper in my Lit Theory class. I welcome (encourage!) feedback on my work, including corrections to my research and/or interpretations. I hope you will consider commenting if you have something to add to the ideas presented here. —KF
- Might As Well Call It the Digital Humanities
- Really, Though, What Is Digital Humanities
- DH: Theory vs. Practice
- The Author Is Dead. Long Live the @uthor!