Over the last couple of years I have had a private blog on which I’ve posted numerous thoughts and methodological experiments, in part because it was an effective way of keeping track of them. The blog was originally created to foster communication for a collaborative project with Mike Drout, but I quickly developed a category for discussion of Lexomics, the computational stylistic analysis technique he has been developing at Wheaton College. Many of my posts ended up being contributions to the development of the lexomics method (as well as similar methods, such as topic modelling).
For some time now, I have been considering making these posts, incomplete as they are, available on my public blog. This is in part an endorsement of trends in the Digital Humanities: the increasing emphasis on open access and on scholarship as process as well as product. But it is also due to a realisation that many of my speculations were moving beyond how to achieve certain results using lexomics and into broader questions of interest to the DH community. My first thought was simply to re-post materials from the private to the public blog, but, particularly because of the latter development, I have decided to make this an opportunity to re-work the material and to deepen my thoughts about the significance of lexomics and similar forms of research in the Humanities more broadly.
For some time now, digital humanists have lamented the lack of impact of the Digital Humanities on the wider community of Humanities scholars. This is perhaps unfair for such a young field, but the accusation is often firmly levelled at the sub-area of the field formerly referred to as Humanities Computing—into which lexomics and related methods fall quite comfortably. A better name for this type of approach might be “computational humanities”—which would imply investigating the Humanities by means of computation (typically statistical analysis). We shouldn’t ignore the fact that the ability of Humanities scholars to engage in such work came into being at exactly the time when the Humanities were turning away from traditional forms of linguistic analysis and increasingly towards the analysis of theoretical categories. It is no wonder that computational approaches had little impact on the broader scholarly community (though we must not dismiss their impact on other disciplines such as natural language processing). Now that processor power, memory, and the predominance of the digital in our lives have grown so dramatically, I would argue that the computational analysis of language and text is poised to have a much greater impact. Already, I can say that half my publications of the traditional variety employ DH techniques, and I haven’t really had anyone questioning my methods.
That said, I don’t think we should declare the Digital Humanities inevitable, assuming that all Humanities work will be done this way in the future. There are many challenges to negotiate. I’m very mindful of the angst over the status of DH recently encapsulated in Andrew Prescott’s lecture Making the Digital Human: Anxieties, Possibilities, Challenges. I can’t agree with Prescott’s accusation that the scope of enquiry in the Digital Humanities is too narrow and service-oriented, or his implication that big data and digital native objects are a more appropriate area of study than the works of the pre-digital era. But, as Prescott reminds us several times, his views are a response to the institutional framework of the British university system. That system is now utterly different from the one that educated me in the 80s and 90s, so I have to take him at his word. From across the Atlantic, the situation looks more diverse. I do feel that there is a tendency in the United States for the DH community to be dominated by people in non-faculty positions. I don’t begrudge them this; but unless faculty can bring the same forms of knowledge and enthusiasm to the Digital Humanities, the alt-ac community alone may not be able to give it the status of a well-defined or independent discipline (assuming for the sake of argument that either is a worthy goal). We do need to define a scholarly programme that co-exists with the issues we are studying in our various sub-disciplines of the Humanities.
To this end, it has been suggested that digital humanists begin to ask “fundamental questions”, as Claire Clivaz put it in a recent blog post, What does it mean a “fundamental research question” in DH? Personally, I’m a bit sceptical of this, perhaps because the word “fundamental” too easily evokes Chaucer’s Host’s words to the Pardoner: “Thow woldest make me kisse thyn olde breech / And swere it were a relyk of a seint, / Thogh it were with thy fundement depeynt.” Indeed, too often the answers supplied to big questions have been mere rhetoric: spun words that turn out to be, as it were, full of shit. That caveat aside, I don’t think it is necessarily wrong to ask how DH can have unique and important areas of investigation. What makes a question fundamental in the Humanities seems to be its ability to postulate an answer which transcends an individual historical moment. If an idea must be historically contingent, then it is preferable to locate its moment of origin and argue that the following period was fundamentally (there’s that word again) different from what came before as a result of this revolutionary new development.
That last paragraph was perhaps overly cynical, but I’ll just blame Chaucer for that. I am less concerned to point out the flaws and hypocrisies in much scholarship in the Humanities than to point out the basic nature of its methods. The power of asking (and answering) the “fundamental questions” is largely rhetorical. The arguments presented in favour of one answer or another do not constitute proof in anything like the mathematical or scientific sense. Indeed, most Humanities scholars recoil at the possibility of proving singular answers to these questions. The success of an argument depends on its power to persuade, and its persuasive power, when not based on the verbal fireworks of the author, is derived from examples, not empirical evidence. Most of the time we gesture towards some sort of evidentiary standard for persuasion in our various sub-disciplines—and some fields like history and linguistics do so more than others. But this is frequently a result of a greater availability of data. In fields like literary criticism, as Franco Moretti has pointed out in Graphs, Maps, Trees, a huge preponderance of evidence—like thousands of long-forgotten novels—tends to be elided or treated with very broad brushstrokes indeed. This is where the Digital Humanities potentially challenges the approaches of the Humanities as practised for much of the past century. By translating texts (broadly defined) into data, it flirts with the idea of finding answers to our questions based on a greater whole that we can’t, or refuse to, access without the mediation (or even collaboration) of a computer. Moretti clearly shows how this approach leads to a new expansion of the canon, but there is a much more important question at stake. This greater whole can begin to look like empirical evidence. In expanding the information that forms the basis of our arguments, are we in some way formulating answers that are more evidentiary than rhetorical in their persuasiveness?
And the corollary, I suppose, is whether we risk formulating our questions based on the possibility of generating evidence that corresponds to scientific “proof”, ignoring the forms of theoretical or ideological categories that have dominated thought in the Humanities for so long. Is DH then a return to positivism (a term of abuse in the Humanities community)? This ambivalence towards proof often opens the door for a double standard to be applied to quantitative research in the Humanities. On the one hand, traditional scholars disown methods that claim to establish proof; on the other, they are all too ready to dismiss quantitative methods precisely because they fail to do so. Clivaz’s characterisation of the frequent response of the traditional Humanities scholar is revealing: “OK… you build tools, you are able to deal with a huge amount of data, and what? What is the purpose? What do you get as new ideas, results?” Now who’s being cynical?
What do we need to create results that are compelling (and persuasive) to the rest of the Humanities community? That’s really a bigger question than I want to take on here, but I’ll take a stab at some of the issues that I’ve been thinking about and that have a bearing on my work. The traditional humanist is typically less interested in the process of getting results than in what you have to say at the end of the process. (I notice today that many Humanities scholars talk about their scholarly activity as “writing”, rather than “research”, and I think that this is due to more than just the pressure to publish.) This is another way in which the Digital Humanities challenges traditional ways of doing scholarship in the Humanities. Digital Humanities work requires a thorough (and time-consuming) understanding of how the “results” relate to the data. It thus emphasises method and process more than ideas and argument, to a degree that other Humanities approaches do not. And this, of course, brings up the “hack v. yack” debate. (At the time of this writing, I think the best summary of this debate is Adeline Koh’s blog post More Hack Less Yack?: Modularity, Theory and Habitus in the Digital Humanities, and the accompanying comments.) Whilst Stephen Ramsay (and, less directly, Digital Humanities funding bodies) have made a strong case that DH is about “building”, there are many who feel that ideas and debate are being sidelined. Ian Bogost’s post on OOO and Politics highlights how not foregrounding categories of identity (the dominant critical paradigm in my scholarly lifetime) is already threatening to many students of the Humanities. It seems to me that placing method before the formulation of these categories is equally problematic for many scholars considering the Digital Humanities; indeed, many critics of DH are particularly concerned with scholarly paradigms grounded in categories such as identity.
The easiest solution would be for digital humanists simply to say, “That’s what makes us distinctive,” and go off and build their own departments devoted to the understanding of data manipulation and analysis. Such a vision even emerged today from William Pannapacker’s article No DH, No Interview in the Chronicle of Higher Education. As Alex Reid points out, Pannapacker’s comparison of DH to theory highlights how they operate as “two competing methods, which might become complementary (and may be complementary in some scholars’ work) but are largely seen as incongruous at this point”. Although I wouldn’t advocate the intellectual, let alone institutional, separation of DH from the Humanities disciplines from which it is emerging, I do think that such a move would be successful in a very short period of time. DH has already proved its power to bring in funding, to generate opportunities, and to capture the public imagination. I think the critique of DH is often a response to this power, an attempt to assert control, and, yes, to re-assert the traditional scholarly paradigm. But this is not to say that the critics are all speaking out of their “fundement”. My ideal would be for them to embrace the DH emphasis on method whilst DH embraces their desire for a more reflective, argumentative, and ideological form of discourse.
The best way to do that is to cultivate a DH which makes room for people on both ends of the theoretical-methodological spectrum and encourages its practitioners to move back and forth along it. I think that DH already has a useful tool for encouraging this movement: the exposure of process. This lengthy introduction to the largely method-oriented posts to follow thus serves as a justification for the scholarly exploration of methodology in a format that allows for reflection on and exposure of one’s theoretical assumptions. Given that my natural preference is for the “hack” over the “yack”, I’ll consider the latter something of a challenge to be taken on as I rework the posts. If I’m lucky, I’ll find some juicy material for those who like to chew on the triumvirate categories of race, gender, and class. But those are not my primary interests at the moment (I have worked on them more in the past), so they may or may not come up. The “big” or “fundamental” questions I want to explore concern how information relates to meaning and how this relationship constructs interpretive practice. Lexomics and the related technologies I’ll be employing generate patterns from texts, patterns that can be expressed as visualisations. The patterns are produced largely by algorithms that are sensitive to statistical frequency, not textual meaning, yet they appear to correlate in some cases with patterns found by non-digital means. As a DH scholar, I make a leap of faith that some of the patterns reveal meanings that would have been missed by non-digital analysis or that can only be captured digitally. But some patterns are meaningless. As a result, we need to devote considerable attention to how we discriminate between the meaningless and the meaningful. And then we need to figure out how to understand the meaning of what remains.
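To make concrete what “sensitive to statistical frequency, not textual meaning” looks like in practice, here is a minimal, purely illustrative sketch in Python (not the Lexomics implementation): it reduces a text to its relative word frequencies, discarding word order and meaning entirely.

```python
from collections import Counter

def relative_frequencies(text):
    """Reduce a text to its relative word frequencies.

    Word order and meaning are discarded entirely; only how often
    each token occurs survives. Tokenisation here is naive
    lowercasing and whitespace splitting."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return {token: n / total for token, n in counts.items()}

freqs = relative_frequencies("the cat sat on the mat")
# "the" accounts for 2 of the 6 tokens
```

Everything the algorithm subsequently “sees” is in that dictionary; any pattern it finds is a pattern of frequencies, which is why correlating such patterns with textual meaning requires a separate interpretive step.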
So let me begin with some basic propositions/assumptions. There is now a fairly large body of work showing that at some level word frequencies create stylistic fingerprints. Most of the research on this has been directed towards authorship attribution, so we have less knowledge of how to detect the greater variety of discursive practices from word frequencies. Most of the research has also been on Modern English, so we don’t have a particularly good understanding of how word frequencies relate to grammatical structure, orthographic practices, and the like in a wider variety of linguistic situations. What do word frequencies mean when the definition of what constitutes a “word” is itself uncertain? On the other end of the scholarly workflow, word frequency patterns are often interpreted through graphs or visualisations. In the case of lexomics, word frequency patterns are sorted into pretty dendrograms which somehow reveal to us similarity relationships between texts and parts of texts. But why do some algorithms yield more meaningful results than others? What types of similarity are represented in the graph (and to what extent are we obligated to find out)? Lexomics (and similar approaches) unlinks language from its context—a problem for many scholars of a materialist bent, myself included. If there is a way to factor context back in, what would that mean for our understanding of the materials we study? Does working with only words and numbers mean that the fingerprints we detect are really just digital “ghosts”, haunting the texts from which they are extracted but without a way to engage with the material world?
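As an illustration of how such dendrograms arise, here is a deliberately naive sketch of agglomerative (single-linkage) clustering over word-frequency vectors, written in plain Python rather than drawn from the actual Lexomics toolchain; the text chunks and their labels are invented for the example.

```python
import math
from collections import Counter

def frequency_vector(text, vocabulary):
    """Relative frequency of each vocabulary word in the text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)
    return [counts[w] / total for w in vocabulary]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def agglomerate(items):
    """Naive single-linkage agglomerative clustering.

    `items` maps labels to frequency vectors. Returns a nested
    tuple of labels: the dendrogram's branching structure."""
    clusters = [(label, [vec]) for label, vec in items.items()]
    while len(clusters) > 1:
        # Find the closest pair of clusters, taking the distance
        # between two clusters to be the minimum pairwise distance
        # between their members (single linkage).
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(u, v)
                        for u in clusters[i][1]
                        for v in clusters[j][1])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = ((clusters[i][0], clusters[j][0]),
                  clusters[i][1] + clusters[j][1])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0][0]

# Invented chunks standing in for segments of real texts.
chunks = {
    "A": "the king rode forth with his men",
    "B": "the king and his men rode to war",
    "C": "she sang softly of summer and of love",
}
vocab = sorted({w for t in chunks.values() for w in t.lower().split()})
tree = agglomerate({k: frequency_vector(t, vocab) for k, t in chunks.items()})
# A and B share most of their vocabulary, so they merge first:
# tree == ('C', ('A', 'B'))
```

Note that nothing in this procedure knows what any word means: the “similarity” in the resulting tree is entirely a function of the distance metric and linkage rule chosen, which is one reason different algorithms can yield more or less meaningful-looking results.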
That’s a suitably metaphysical note on which to end this rambling introduction to the methodological questions I’ll be exploring. With any luck, the much more focused explorations in subsequent posts will at least take a stab at these issues, even as they tackle the practicalities of doing DH using quantitative methods.