What is Digital Humanities?

Digital humanities is the application of computation to those areas we traditionally think of as humanities subjects. As the field grows, it is interesting to see how it is represented by its main texts, in this analysis monographs and edited books, as reflected in the Amazon book database. This type of approach is not without flaws, of course. As a limited study of the field we do not survey journals, nor do we examine the outputs of digital humanities projects in the form of what is sometimes called "born-digital" content, such as archives, TEI projects, websites, tools and code archives. Indeed, this survey should be read with care, as the results carry idiosyncrasies introduced by the constraints on our time, the limits on data requests through the API, the available technical support, and the record-selection algorithm behind the Amazon API itself. Nonetheless, even with these caveats it is notable that the results, broadly speaking, do reflect clusters of what we might think of as fields of study, and the connections between them. 

This site is the result of a Datasprint that attempts to map the digital humanities, using the Amazon API as a data source to show the relationships between different titles via the SimilarityLookup feature, where "similarity is a measurement of similar items purchased, that is, customers who bought X also bought Y and Z." One has to take this data at face value, of course, both because of limitations in the way the API returns data (e.g. a maximum of ten recommendations per title) and because of the networked nature of the requests. To make the data easier to read and understand, it has also been filtered to remove the least referenced texts (filter level 2) after being run through the Gephi visualisation software. 
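For readers curious about what such a request involves, the following is a minimal sketch of a signed SimilarityLookup call against the classic Product Advertising API REST endpoint (the version in use at the time of the Datasprint, since retired). The credentials and ASIN are placeholders, and the exact quoting and signing conventions shown are our reconstruction rather than code from the project itself.

```python
# A minimal sketch of a signed SimilarityLookup request against the
# classic Product Advertising API REST endpoint (since retired).
# The access key, secret key, associate tag and ASIN are placeholders.
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request
from datetime import datetime, timezone

HOST = "webservices.amazon.com"   # the "us" locale endpoint
PATH = "/onca/xml"

def similarity_lookup(asin, access_key, secret_key, associate_tag):
    """Return the raw XML response for a SimilarityLookup on one ASIN."""
    params = {
        "Service": "AWSECommerceService",
        "Operation": "SimilarityLookup",
        "ItemId": asin,
        "AWSAccessKeyId": access_key,
        "AssociateTag": associate_tag,
        "Timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    # Canonical query string: keys sorted, values RFC 3986 percent-encoded.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='-_.~')}"
        for k, v in sorted(params.items())
    )
    # Sign "GET\nhost\npath\nquery" with HMAC-SHA256 and the secret key.
    to_sign = f"GET\n{HOST}\n{PATH}\n{query}"
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    url = (f"https://{HOST}{PATH}?{query}"
           f"&Signature={urllib.parse.quote(signature, safe='')}")
    with urllib.request.urlopen(url) as response:
        return response.read()   # XML listing up to ten similar items
```

The ASINs in the response can then be fed back into the same call, which is what makes the drill-down described under METHOD possible.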
 


DIGITAL HUMANITIES FIRST PASS NETWORK AND COGNATE AREAS - Image CC-BY-SA

OVERVIEW

Digital Humanities is, broadly speaking, the application of computation to the disciplines of the humanities. But Digital Humanities is, and remains, a contested term. For example, the Digital Humanities have been variously termed the “Next Big Thing”, “young”, “energetic”, “theological”, “political”, “alt-ac”, “transformatory”, and so on. For Stanley Fish, DH “is theological because it promises to liberate us from the confines of the linear, temporal medium in the context of which knowledge is discrete, partial and situated." He also argues that “digital humanists love to be surprised”. In attempting to explore the field of digital humanities, therefore, we were keen to be "surprised", experimenting with digital methods, visualisation tools, graph editors, and so forth. 

In undertaking this research we were also interested in exploring the field of the digital humanities in relation to its (self-)identification as a growing and vibrant field of study. In particular we wanted to explore how digital humanities relates to cognate fields, for example electronic literature and software studies, but also how the field is presently organised, through a visualisation of its main printed literature. Of course, there is some irony in applying this approach to a subject that attempts not just to be agnostic about publication formats (e.g. print, digital, etc.) but also to actively encourage non-traditional research outputs. 

One of the interesting outcomes of the practices involved in undertaking this form of digital method is the importance of Stephen Ramsay's "hermeneutics of screwing around": the iterative process of adapting the viewing perspective, changing the data filters, and editing the colours, layout, depth, degree, and relative "importance" of particular nodes. Which is to say, using and adapting the way the data is presented in order to surface patterns and interesting features of the graph. 


METHOD

One of the distinct advantages of this type of research approach is that it readily generates data sets in relation to the research questions. This is also its disadvantage: we were soon faced not with unlimited amounts of data exactly, but certainly with large quantities of results, options, requests and so forth. Scoping both our data requests and the resultant analysis of the data soon became an important aspect of the research. 

The basic methodology used for the generation of the data is outlined in the "methods" section above. Briefly summarised, we used ten seed books as a starting point into the data, requested ten recommendations for each, and then drilled through these results to a depth of three degrees, as sketched in the code below. This generated on the order of a thousand titles for each of the subject areas of digital humanities and electronic literature. This initial data set allowed us both to get to grips with the Gephi software, which allows sophisticated data visualisation, and to scope the next data phase in relation to the first. 
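In code terms the drill-down amounts to a breadth-first walk over the "customers also bought" links. The sketch below is a minimal reconstruction under that assumption: similar_items() is a hypothetical wrapper around the SimilarityLookup call above, and the output is a Source,Target edge table that Gephi imports directly.

```python
# A sketch of the drill-down as a breadth-first walk over the
# "customers also bought" links. similar_items() is assumed to wrap the
# SimilarityLookup call above and return a list of recommended ASINs.
import csv
from collections import deque

def crawl(seeds, similar_items, max_depth=3):
    """Follow recommendation links from the seed ASINs to max_depth."""
    edges = set()
    seen = set(seeds)
    queue = deque((asin, 0) for asin in seeds)
    while queue:
        asin, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for neighbour in similar_items(asin):   # up to ten per title
            edges.add((asin, neighbour))
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, depth + 1))
    return edges

def write_edges(edges, path="dh_network.csv"):
    """Write a Source,Target table that Gephi can import as an edge list."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Source", "Target"])
        writer.writerows(sorted(edges))
```

In practice heavy overlap between recommendation lists keeps the node count well below the theoretical maximum of ten new titles per step, which is how ten seeds at a depth of three yield roughly a thousand titles rather than tens of thousands.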

The second data phase was organised around a comparative approach, making data requests to the "local" Amazon APIs: "ca", "cn", "de", "es", "fr", "it", "jp", "co.uk", "us". To make the data set more manageable we limited the drill-down (or degree) to two levels, which generated on the order of six hundred books for each of the larger domains. We then used a more aggressive filtering process to surface more information while presenting less complexity in the graph. This method of organisation and filtering was applied to each of the Amazon domains.
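Concretely, the second phase can be thought of as the same crawl re-run per locale at depth two, followed by a degree filter. The sketch below assumes the locale endpoints of the retired API, and reads "filter level 2" as a minimum-degree-of-two filter (our interpretation of the Gephi degree-range filter we used); seeds_for() and make_similar_items() are hypothetical helpers.

```python
# A sketch of the comparative second phase: the same crawl run against
# each locale endpoint at depth two, then filtered by degree. The hosts
# follow the pattern of the retired API; min_degree=2 is our reading of
# "filter level 2"; seeds_for() and make_similar_items() are hypothetical.
from collections import Counter

LOCALE_HOSTS = {
    "ca": "webservices.amazon.ca",
    "cn": "webservices.amazon.cn",
    "de": "webservices.amazon.de",
    "es": "webservices.amazon.es",
    "fr": "webservices.amazon.fr",
    "it": "webservices.amazon.it",
    "jp": "webservices.amazon.co.jp",
    "co.uk": "webservices.amazon.co.uk",
    "us": "webservices.amazon.com",
}

def filter_by_degree(edges, min_degree=2):
    """Single pass: drop nodes referenced fewer than min_degree times."""
    degree = Counter()
    for source, target in edges:
        degree[source] += 1
        degree[target] += 1
    keep = {node for node, d in degree.items() if d >= min_degree}
    return {(s, t) for s, t in edges if s in keep and t in keep}

# Hypothetical usage, reusing crawl() and write_edges() from above:
# for locale, host in LOCALE_HOSTS.items():
#     edges = crawl(seeds_for(locale), make_similar_items(host), max_depth=2)
#     write_edges(filter_by_degree(edges), f"dh_{locale}.csv")
```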