* Peter Bloom (Swansea University)
Computing Fantasies: Psychologically Approaching Identity and Ideology in the Computational Age: In popular and scholarly accounts alike, the new millennium is believed to have ushered in a new epoch centred on the seemingly ubiquitous presence of computers, and an ethics of the computational, in all spheres of social and personal life. This “computational turn” necessitates a re-thinking of established concepts of identity and ideological dominance. More precisely, how has this shift altered actors’ desires, and therefore their sense of self, and what does this reveal about present forms of critical hegemony? This work deploys the Lacanian psychoanalytic concept of fantasy to understand this novel “computational subject”, investigating the shared utopian visions that structure contemporary aspirations and psychologically stabilize identity. Specifically, it isolates two emerging “computational fantasies”: the first, “computational man”, revolves around the desire to gain personal and social wholeness by continually improving our computational ability; the second, “computational humanity”, is linked to the promise that computing technology will enhance our ability to meet uniquely “human” needs such as emotional health. Despite their differences, these fantasies represent the evolving ideological legitimisation of capitalism, and of its associated values such as consumerism and privatisation, in the modern era.
* Annamaria Carusi (University of Oxford)
Technologies of Representation: Images, Visualisations and Texts: As technologies for visually rendering data of all kinds continue to develop in capacity and power at an impressive rate, the research and knowledge domains in which they are deployed are fundamentally challenged at the very same time – and by the very same token – as they are being supported and facilitated. Research and development in this area is informed by a wide variety of disciplines. This paper, instead, brings to the fore humanities approaches to imaging and visualisation. The humanities have a rich and deep tradition of understanding representations, and of interpretive techniques for making sense of them. With some notable exceptions, this tradition has been under-represented in the general discourse and conceptual framework informing the development and deployment of imaging and visualisation technologies. A humanities approach brings about a more reflexive understanding of the tools used for conducting research, which is paramount for the humanities but also, I shall argue, for other disciplines. In addition, it offers a route out of narrowly cognitivist and representationalist conceptions of imaging and visualisation, offering instead a richer account of the inter-relationship between acts of interpretation, modality, and the ontological commitments made by researchers in the process of their research. The position piece outlines some of the questions that need to be asked in order to understand the role of technological mediation in images, visualisations and texts; outlines a broadly phenomenological approach; and points to research undertaken with philosophical texts on the one hand and with scientific visualisations on the other.
* Joaquim Ramos de Carvalho (University of Coimbra)
Self-Organization, Zipf Laws and Historical Processes: Three Case Studies of Computer-Assisted Historical Research: This paper presents three case studies in which the intensive use of computer-based techniques contributes significantly to the understanding of historical processes. Each case study detects evidence of a self-organizational historical process, i.e. a process in which a complex structure emerges not through the intervention of a centralized coordinating entity but as the result of the local interactions of agents. In all three cases the emergent nature of the process shows itself in the form of a Zipf law (sometimes called a power law), a particular mathematical distribution that connects relevance, or size, with frequency. In all cases the computer-based analysis provides not only quantitative information but also striking visualizations. The three case studies are: settlement patterns, mail routes and spiritual kin networks.
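The rank-size relationship the abstract invokes can be made concrete in a few lines of code. The sketch below is an illustration only (not part of the case studies, and the data are an idealised 1/rank series): it estimates the power-law exponent by regressing log frequency on log rank.

```python
import math

def zipf_exponent(frequencies):
    """Estimate the Zipf/power-law exponent by least-squares
    regression of log(frequency) on log(rank)."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # Zipf's law predicts a value near 1

# Idealised sizes following an exact 1/rank law (illustrative data)
sizes = [1000 / rank for rank in range(1, 51)]
print(round(zipf_exponent(sizes), 2))  # → 1.0
```

On real settlement sizes or route frequencies the fitted exponent would of course deviate from 1; an approximately straight line in log-log space is the signature of the emergent processes the paper describes.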
* Tom Cheesman (Swansea University)
Is What Computation Counts What Counts?: As a literary translator, editor, critic, and cultural historian, I'm sceptical about the computability of most of the problems I think are interesting. Obviously digitization and computation make helpful tools. For instance, when I’m working on a large set of redactions of ‘a’ text and I want to be able to see quickly what the differences and similarities are: I imagine that if I had the right software package (and I don’t) this could become far easier to do systematically and thoroughly, rather than relying on my own memory and sense of pattern. Beyond that, as far as I can see, available tools for editing, analysing, and mining texts don't do what I want to do. Recently I’ve been working on a set of about 30 different translations (re-translations) of Othello, in German, dating from the 1760s to the present. My basic working idea is that the divergent re-translations encapsulate a micro-history of German notions of race difference. I’ve done a pilot study, taking a sample of two lines involving terms freighted with ideological values, used in deliberately ambiguous ways by Shakespeare, and (at a different level) by the character who speaks them, who represents the State in the play. This is the Duke of Venice (to Desdemona’s father, Brabantio, in Othello’s presence): “If virtue no delighted beauty lack, / Your son-in-law is far more fair than black.” Within the set of 30-odd German re-translations I’ve collected, there are significant patterns in the differences between versions of these lines. It turns out (not to my surprise) that subsets of the re-translations can be grouped because they share specific lexical and syntactic and (therefore) semantic features, and, lo and behold, these subsets quite neatly correspond to distinct historical periods in German political history. 
Translators re-translated ‘race in the voice of the State’ in these lines differently before and after 1871, before and after 1918, before and after 1945, before and after 1990, and before and after 2000. These are all turning points for the history of (the) German state(s) and for the history of German cultural identifications in terms of race, ethnicity, the nation, and its ‘others’. (In 2000, a new citizenship law came into force, breaking the formal link between German ethnicity and citizenship for the first time since 1913. My starting-point was a politically and aesthetically radical ‘tradaptation’ of Othello in 2003, co-authored by a Turkish German writer.) The pilot study was conducted without the aid of computation, except that I did create a Word file with the set of redactions of the two lines, and searched it for words (or groups of letters, as a 'home-made' substitute for marked-up lemmata) to back up my own memory. That way I did find one or two parallels (recurrences) which I might otherwise have overlooked. A full text study would be the next step; steps beyond that would be full text studies of race/nation in further Shakespeare plays, in German and why not in further languages. On that scale, computational tools would become necessary, simply in order to manage the sheer size of the dataset: one would want to align all the re-translations and be able to have all versions of a given passage on screen at once. But would it even be worth marking up these texts fully? Gross searchability and low-level pattern recognition are offered by crude digitalisation. Text analytics can identify recurrent or parallel lexical and syntactic features only to a very limited level (e.g. trigrams, in MONK). 
This sort of tool is no help in dealing with semantics at the levels I am interested in: the continuum of levels from readings (in the plural) implied or enabled by specific choices by individual re-translators, to the multiple, competing, historical and contemporary ideological discourses to which their choices implicitly, intentionally, or unintentionally refer, and the implicit intertextual dialogue between re-translators, and re-editors, and re-producers, and publics... In short, no computer saves me the trouble of reading my primary texts, reading a lot of other primary texts which constitute their cultural contexts, and reading secondary texts which help me think about what I read, and so forming judgments about what what I am reading meant and what it means or what it can be made to mean, plausibly, in my critical argument. So, on the one hand I can imagine applying for funding to digitize large sets of translations of Shakespeare plays: say, Othello, The Merchant of Venice, The Tempest, in French, German, Italian, and Spanish, for a project called Shakespeare in European Race Hate. On the other hand, I suspect that the time spent scanning, correcting, formatting, aligning, and (above all) marking up the texts, and managing and supervising the people doing this, would be better spent simply reading, thinking, and writing.
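The low-level grouping the pilot study found by hand can be approximated computationally. The sketch below is not Cheesman's method or data: the "versions" are invented English placeholders, and the trigram matching stands in for the kind of crude surface-feature comparison (as in MONK) that the abstract says is all current tools offer.

```python
def trigrams(text):
    """Character trigrams: the kind of low-level surface feature
    that tools such as MONK can match."""
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similarity(a, b):
    """Jaccard overlap of trigram sets: a crude signal for grouping
    versions that share lexical and syntactic material."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

# Invented placeholder versions of a line (not actual re-translations)
versions = {
    "v1": "your son-in-law is far more fair than black",
    "v2": "your son-in-law is much more fair than black",
    "v3": "the bridegroom shines more brightly than the night",
}
print(similarity(versions["v1"], versions["v2"]) >
      similarity(versions["v1"], versions["v3"]))  # → True
```

Such overlap scores could cluster re-translations into the period subsets described above, but, as the abstract argues, they say nothing about the ideological discourses the divergent word choices invoke.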
* Tim Cole and Alberto Giordano (University of Bristol; Texas State University, San Marcos)
The Computational Turn: GIScience and the Holocaust: This paper is a brief progress report on ongoing interdisciplinary research that draws on the methodologies of GIScience to visualize and interpret the Holocaust. The focus is on a component project of a broader research project that examines the potential of a historical GIS of the Budapest ghetto to generate new research findings and questions.
* Morgan Currie (University of Amsterdam)
The Feminist Critique: Mapping Controversy on Wikipedia: Research on Wikipedia often compares its articles to print references such as the Encyclopedia Britannica, a resource historically associated with depoliticized content, neutrality, and the desire to catalogue the external world objectively. But Wikipedia, the free-content, openly editable, online encyclopedia, evolves out of a process whereby multiple perspectives, motives, compromises, protocols, and software determine the present version of an article. Using controversy as an epistemological device, can we explore Wikipedia to map editors’ concerns around an issue? Can we observe how Wikipedia manages or defuses controversy within an article or across a wider ecology of related links? To begin, one should ask how to pinpoint controversy and its resolution on a site that is updated several times every second. What methods can we use to trace a dispute through to its (temporary) resolution and to observe the changes in an article’s content, form, and wider linked ecology? This paper starts by examining past research, which has assessed the accuracy of Wikipedia’s articles at the expense of exploring Wikipedia as a technically mediated process. Against this, one can argue that controversy and discussion are critical to an article’s development in a participatory platform. Next, the paper works towards a definition of controversy using Actor-Network Theory and applies this specifically to Wikipedia’s own protocols and technologies for consensus editing. Finally, the paper provides methods for isolating and visualizing instances of controversy in order to assess its role both in a network and in an article’s editing history; it investigates ways to track and graphically display controversy and resolution using the Feminism article and its larger network as a case study. This research reveals the complex dynamism of Wikipedia and the need for new analyses when determining the quality of its articles.
Wikipedia is not a static tome but a collaborative process that unfolds on several scales and hierarchies. An article can evolve precisely from conflict between editors who disagree, and it may achieve its current state only after years of development. This paper proposes a variety of methods to map and visualize these dynamics.
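One concrete way to pinpoint conflict of the kind described above is to detect identity reverts: revisions whose content hash matches an earlier revision and therefore restore a previous state of the article. The sketch below is a minimal illustration under that assumption (the edit history is hypothetical, not drawn from the Feminism article):

```python
def count_reverts(revisions):
    """Count identity reverts in a chronological edit history.
    `revisions` is a list of (editor, content_hash) pairs; a revision
    whose hash has appeared before restores an earlier article state."""
    seen = set()
    reverts = 0
    for editor, content_hash in revisions:
        if content_hash in seen:
            reverts += 1
        seen.add(content_hash)
    return reverts

# Hypothetical edit war: B's change is undone, restored, and undone again
history = [
    ("A", "h1"), ("B", "h2"), ("A", "h1"),  # A reverts B
    ("B", "h2"),                            # B restores the change
    ("C", "h1"),                            # C reverts once more
]
print(count_reverts(history))  # → 3
```

Applied to a real article's revision metadata, a spike in such reverts marks the disputed passages and periods that a visualization of controversy would need to foreground.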
* Scott Dexter (Brooklyn College of CUNY)
Toward a Poetics of Code: In this project, we focus on what software is and the nature of its making: its poetics. Thus, the locus of our study is not software-in-execution but software-in-creation, or source code. We suggest that an understanding of source code as an expression of an embodied aesthetic experience of production, one which is both literary and performative, may yield new paradigms for understanding not only software but also software-mediated creation of all genres.
* Dan Dixon (University of the West of England)
Analysis Tool or Design Methodology?
* Federica Frabetti (Oxford Brookes University)
Have the Humanities Always Been Digital?: For an Understanding of the ‘Digital Humanities’ in the Context of Originary Technicity: This paper is situated at the margins of what has become known as ‘Digital Humanities’, i.e. a discipline that applies computational methods of investigation to literary texts. Its aim is to suggest a new, somewhat different take on the relationship between the humanities and digitality by putting forward the following proposition: if the Digital Humanities encompass the study of software, writing and code, then they need to critically investigate the role of digitality in constituting the very concepts of the ‘humanities’ and the human. In other words, I want to suggest that a deep understanding of the mutual co-constitution of technology and the human is needed as an essential part of any work undertaken within the Digital Humanities. I will draw on the concept of ‘originary technicity’ (Stiegler 1998, 2009; Derrida 1976, 1994; Beardsworth 1995, 1996; Critchley 2009) and on my own recent research into software as a form of writing - research that can be considered part of the (also emerging) field of Software Studies/Code Studies - to demonstrate how a deconstructive reading of software and code can shed light on the mutual co-constitution of the digital and the human. I will also investigate what consequences such a reading can have - not just for the ‘humanities’ and for media and cultural studies but also for the very concept of disciplinarity.
* Adam Ganz and Fionn Murtagh (Royal Holloway University of London)
From Data Mining in Digital Humanities to New Methods of Analysis of Narrative and Semantics: We apply advanced mathematical and computational methods (based on quantitative data analysis) to the analysis of narrative, to reveal its deep structure. The data analysis platform developed by the renowned geometric data analyst Jean-Paul Benzécri provides us with core methodological tools. These are augmented by the applications of Benzécri's work made by the acclaimed social scientist Pierre Bourdieu. We compare these results with the theoretical and methodological approaches of different humanities disciplines to devise tools for the analysis of the structure of narrative in a new and fundamentally cross-disciplinary approach.
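The core of Benzécri's geometric data analysis is correspondence analysis. The sketch below shows that step only, on an invented toy table (not the authors' data): a contingency table of, say, narrative-feature counts per text is decomposed so that texts with similar profiles land close together on the principal axes.

```python
import numpy as np

def correspondence_analysis(table):
    """Plain correspondence analysis: decompose a contingency table
    into principal axes that place rows with similar profiles
    (e.g. texts with similar feature distributions) near one another."""
    N = np.asarray(table, dtype=float)
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)                 # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardised residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * sv) / np.sqrt(r)[:, None]         # principal row coordinates
    return row_coords, sv ** 2                          # coordinates and inertias

# Invented toy table: 3 "texts" x 4 feature counts (illustrative only)
counts = [[10, 2, 1, 0],
          [9, 3, 0, 1],
          [1, 1, 8, 9]]
coords, inertia = correspondence_analysis(counts)
# Texts 1 and 2 share a profile; text 3 should land far from both
print(np.linalg.norm(coords[0] - coords[1]) <
      np.linalg.norm(coords[0] - coords[2]))  # → True
```

Distances between the row coordinates reproduce the chi-squared distances between profiles, which is what lets such "maps" of texts, as in Bourdieu's uses of the method, be read structurally.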
* Kevin Hayes and Marek Sredniawa (Warsaw University of Technology)
A Cultural Analytics-Based Approach to Polymath Artists: A Witkacy Case Study: Great polymath figures call for a multifaceted, Cultural Analytics-based approach and for the multidisciplinary collaboration of professionals in order to generate a full view of their work and reveal the complex relationships among the different areas of their activities. The Big Humanities concept of interdisciplinary, web-based collaboration also seems well suited to curing the syndrome of specialization in the humanities. The Polish artist Witkacy was chosen for the case study because he remains neglected and little known in global culture. There was also another important motivation: he continues to inspire contemporary artists in many fields, from painting to theatre and music, and the influence of his concepts on modern art can still be traced; e.g. his S.I. Witkiewicz Portrait Firm, with its “Rules”, is a forerunner of Warhol’s Factory. The key idea of Virtual Witkacy is to utilize the state of the art in web applications and to go beyond digital libraries, galleries and Wikipedia by adding semantic and social-networking dimensions. It is envisioned as a place where professionals can collaborate by discovering, publishing and sharing their content, but also as an educational platform. Through semantic tags, shared content analysis, and preference and choice tracking it can evolve collaborative intelligence, producing a transformative synergy in which the knowledge of all may be accessed and aggregated, semantically organized, with easily updated content. Social networking and social tagging can represent individual visions reflecting different perspectives. Launching Virtual Witkacy as an international collaborative praxis, process and project could contribute to a better understanding of modernism and integrate the worldwide Witkacian community, offering both value and fun.
* Adelheid Heftberger (Austrian Filmmuseum)
Film Data for Computer Analysis and Visualisation: In the three-year Vienna project Digital Formalism, three institutions worked together on developing methods for the annotation of filmic data in order to study rhythmic and montage structures. Our filmic material consisted of the work of the Russian film pioneer, documentarist and experimental filmmaker Dziga Vertov. Not only are the films stored at the Austrian Filmmuseum (one partner in the project), but a large collection of non-filmic material was also made available in an online database. From a thorough annotation of the films according to the requirements of the researchers, we moved to first attempts to explore the data through visualisations in collaboration with Lev Manovich.
* Mireille Hildebrandt (Vrije Universiteit Brussel)
The Meaning and Mining of Legal Texts: Positive law, inscribed in legal texts, entails an authority not inherent in literary texts, generating legal consequences that can have real effects on a person’s life and liberty. The interpretation of legal texts, necessarily a normative undertaking, resists the mechanical application of rules, though it still requires a measure of predictability, coherence with other relevant legal norms and compliance with constitutional safeguards. The present proliferation of legal texts on the internet (codes, statutes, judgments, treaties, doctrinal treatises) renders the selection of relevant texts and cases next to impossible. We may expect that systems to mine these texts to find arguments that support one’s case, as well as expert systems that support the decision-making process of courts, will end up doing much of the work. This raises the question of the difference between human interpretation and computational pattern recognition, and the issue of whether this difference makes a difference for the meaning of law. Possibly, data mining will produce patterns that disclose habits of the minds of judges and legislators that would otherwise have gone unnoticed (reinforcing the argument of the ‘legal realists’ at the beginning of the 20th century). Also, after the data analysis it will still be up to the judge to decide how to interpret the results, or up to the prosecution to decide which patterns to engage in the construction of evidence (requiring a hermeneutics of computational patterns instead of texts). My focus in this paper is the fact that the mining process necessarily disambiguates the legal texts in order to transform them into a machine-readable data set, while the algorithms used for the analysis embody a strategy that will co-determine the outcome of the patterns.
There seems to be a major due-process concern here to the extent that these patterns are invisible to the naked human eye and will not be contestable in a court of law, owing to their hidden complexity and computational nature. This position paper aims to explain what is at stake in the computational turn with regard to legal texts. It prepares for the question I want to put to those involved in the distant reading and not-reading of texts: could a visualization of computational patterns constitute a new way of un-hiding the complexity involved, opening the results of computational ‘knowledge’ to citizens’ scrutiny?
* Yuk Hui (Goldsmiths, University of London)
Computational Turn or a New Weltbild?: This paper proposes to look at the computational turn from the perspective of the world picture (Weltbild), a notion taken up by Martin Heidegger in 1938. Arguing against Heidegger's privileging of the ontological, and introducing Dijksterhuis's problematization of the ontological and the epistemological and his account of the end of the mechanized world picture at the turn of the 20th century, it proceeds by asking what it means to be a world picture today. It identifies the world picture of the computational turn with the notion of the "discursive network", produced by the proliferation of networks and logical languages since the last century, and populated, in the current discussion, by representations of cultural dynamics through network/data visualization. The discursive network shares the aesthetics of what Bourriaud called the "Altermodern", the successor of the Postmodern. This paper also attempts to examine the significance of this coincidence and to bring Heidegger back into the discussion.
* Andrew Klobucar (New Jersey Institute of Technology)
“All Your Database are Belong to Us”: Aesthetics, Knowledge and Information Management: For a growing number of humanities researchers, advances in information and network technologies continue to inspire radically revisionary mandates to implement, restructure and expand on traditional pedagogies and academic methodologies in the liberal arts. The work of literary theorists like Franco Moretti, for example, provides an exemplary 21st-century approach to narratology and literary criticism by introducing epistemological concepts more commonly associated with the fields of information management and knowledge representation in order to reconfigure how the novel might be usefully interpreted and assessed with respect to electronic modes of presentation, as opposed to print or analogue formats. For Moretti, visually and spatially oriented paradigms like maps, charts and graphs have become important critical tools within literary criticism now that contemporary electronic information networks have expanded to include an increasing variety of forms of cultural production. The fact that Google Maps software currently allows literary and visual-art concepts to be seamlessly incorporated into geographical frameworks invites, in other words, both a reconceptualisation of space and location as literary terms and a corresponding spatialisation of aesthetic concepts. While this increased cohesion between instrumental and aesthetic relationships to knowledge may instigate distinct disciplines of practice and learning, the aesthetic, i.e. symbolic, elements of contemporary communication technologies have advanced to the point where even the common use of cell phones seems to involve modes of cognitive interaction that extend far beyond the traditional parameters of telephony.
Stanford University professor and software developer Ge Wang, in fact, considers the latest generation of mobile phones to have evolved into nothing less than a kind of “personal intimate device”, whose application might be accurately described as a complex, highly specialised, communal relationship. Technologies like GPS remain notable in these symbolic contexts for their increasingly sophisticated capacity to generate something akin to Kant’s notion of transcendental subjectivity – where our interactions begin to simulate the concurrent experience of the material world and the corresponding projection of a kind of virtual self within a distinct ontological space of shared discourse. As this paper intends to demonstrate, both the aesthetic and the ontological roots of contemporary concepts of information technology bear an extended intellectual lineage within modern humanism, reaching back to Kant’s concept of objectivity as an active faculty of rational consciousness. In fact, the term “information” can itself be traced to Thomas Aquinas's (1225-1274) use of the Latin word "informatio" as a way to theorise important connections between intellect (intellectus) and physical apprehension (sensus). Aquinas derives his epistemology primarily from Aristotelian metaphysics, yet his analysis of knowledge as a specific "in-forming" of matter by active principles of intellectual perception provides an important cultural foundation for the use of information-centred paradigms of modern knowledge. Historical theorisations of epistemological networks and the content informing them thus reveal a consistent concept of modern knowledge from the late medieval period through the Enlightenment onward to contemporary post-industrial, information-based economies.
Beginning with Aquinas and moving through the inception of knowledge representation technologies in the late 18th century to current digital knowledge visualisation software, this paper will outline a very specific "aesthetics" of database management as a key component of both humanities-based pedagogy and modern thought.
* Yuwei Lin (University of Salford)
Text Mining for Frame Analysis of Media Content: I worked at the coordinating hub of the ESRC National Centre for e-Social Science (NCeSS) at the University of Manchester from 2006 to 2009. During my time there, I participated in the development and implementation of several e-Social Science research tools and web services, and also managed a JISC-funded cross-disciplinary collaborative project called “Text Mining for Frame Analysis” (TMFA). This paper is mainly based on my observations and reflections from the TMFA project. It focuses on some methodological issues and challenges in this computational turn in the social sciences. When discussing issues raised in the call for papers, such as pattern-matching vs. hermeneutic reading and the statistical paradigm vs. the data-mining paradigm, I feel it is important to talk about the work practices in some of the projects I was involved in: for example, how databases were constructed for carrying out text-mining and data-mining tasks.
* Bernhard Rieder and Theo Roehle (Laboratoire Paragraphe, Université de Paris VIII; Graduiertenkolleg Automatismen, Universität Paderborn)
Digital Methods: Five Challenges: Digital technology is set to change the way scholars work with their material, how they "see" it and interact with it. The question is how well the humanities are prepared for these transformations. If there truly is a paradigm shift on the horizon, we will have to dig deeper into the methodological assumptions that are folded into the new tools. We will need to uncover the concepts and models that have carried over from different disciplines into the programs we employ today (and tomorrow). In our paper, we offer a non-exhaustive list of issues that we believe will have to be addressed if we want to integrate the new methods productively, without surrendering control over the conceptual infrastructure of our work.
* Alkim Almila Akdag Salah (The Virtual Knowledge Studio for the Humanities and Social Sciences (VKS-KNAW))
Digital Problems/Digital Solutions: Digital humanities faces – at least – two major problems. First, humanities scholars need to work with scientists and programmers in order to execute computational methods, and this requires new types of collaborative environments. Second, there is a gap between the critical thinking of humanities research and the quantitative approach common to computational studies. Concerning the first point: unlike typical collaborations, in which research questions, means or methods are shared, the new collaborative environments work by churning the research questions of a humanities scholar through the skills of scholars coming from different research paradigms with different research aims. What is called for is a new division of labor, which influences the research practices and workflows of humanities scholars, computer scientists and the institutional frameworks supporting them. The second problem relates to the difference in the goals and epistemic traditions of these groups, which creates a major research hurdle. To be able to negotiate new forms of specialization and their re-integration into the research agendas of the humanities, the above-mentioned gap has to be bridged. One aspect of this task is the ability to reformulate abstract research questions in such a way that quantitative analysis can be useful; this, however, requires insight into the potential of computational methods and quantitative analysis. Eventually, a new mixture of curricula will be required to seamlessly integrate traditional and digital humanities. This paper empirically illustrates and conceptually discusses the struggles around digital humanities on the basis of two case studies the author has been working on.
* Joris Van Zundert and Smiljana Antonijevic (Huygens Instituut - KNAW; Virtual Knowledge Studio; Oxford University, Faculty of History)
Cultures of Formalization - Towards an encounter between humanities and computing: The past three decades have seen several waves of interest in developing cross-overs between academic research and computing; the current efforts at developing computational humanities, with their emphasis on virtual research environments (VREs), of which Alfalab (an initiative of the Royal Netherlands Academy of Arts and Sciences) can be regarded as an example, are the latest. Efforts to introduce computational methods typically involve collaborative work between scholars and engineers. In this paper we focus specifically on the formalizations emerging as a result of such encounters. We argue that critical reflection on formalization practices is important for any computational program to succeed: by conceptualizing and describing cultures of formalization in the humanities, we can identify aspects of research that could be better supported if suitable and compatible computing approaches were developed. An approach that stresses cultures of formalization can enrich the computing research agenda and contribute to more symmetrical and constructive interactions between the various stakeholders in computational humanities. Our exploration takes three forms. First, we look more closely at formalization and ask whether it is a singular concept. Second, we ask whether formalization is an aspect of research in the humanities even without (necessarily) thinking of it as driven by computation, and we present four case studies that help us explore that question. Finally, we consider how our analysis enriches what can be understood by formalization, and what light it throws on the encounter between computing and the humanities.
Clement, Tanya E. (2008) ‘A thing not beginning and not ending’: using digital tools to distant-read Gertrude Stein’s The Making of Americans. Literary and Linguistic Computing 23.3: 361.
Clement, Tanya, Steger, Sara, Unsworth, John, Uszkalo, Kirsten (2008) How Not to Read a Million Books. Retrieved 10/11/09 from http://www3.isrl.illinois.edu/~unsworth/hownot2read.html
Council on Library and Information Resources and The National Endowment for the Humanities (2009) Working Together or Apart: Promoting the Next Generation of Digital Scholarship. Retrieved 10/11/09 from http://www.clir.org/pubs/reports/pub145/pub145.pdf
Hayles, N. Katherine (2009) RFID: Human Agency and Meaning in Information-Intensive Environments. Theory, Culture and Society 26.2/3: 1-24.
Hayles, N. Katherine (2009) How We Think: The Transforming Power of Digital Technologies. Retrieved 10/11/09 from http://hdl.handle.net/1853/27680
Kittler, Friedrich (1997) Literature, Media, Information Systems. London: Routledge.
Krakauer, David C. (2007) The Quest for Patterns in Meta-History. Santa Fe Institute Bulletin. Winter 2007. Retrieved 10/11/09 from http://www.intelros.ru/pdf/SFI_Bulletin/Quest.pdf
Latour, Bruno (2007) Reassembling the Social. Oxford: Oxford University Press.
Manovich, Lev (2002) The Language of New Media. Cambridge, MA: MIT Press.
Manovich, Lev (2007) White paper: Cultural Analytics: Analysis and Visualizations of Large Cultural Data Sets, May 2007. Retrieved 10/11/09 from http://softwarestudies.com/cultural_analytics/cultural_analytics_2008.doc
McLemee, Scott (2006) Literature to Infinity. Inside Higher Ed. Retrieved 10/11/09 from http://www.insidehighered.com/views/mclemee/mclemee193
Moretti, Franco (2005) Graphs, Maps, Trees: Abstract Models for a Literary History. London: Verso.
Robinson, Peter (2006) Electronic Textual Editing: The Canterbury Tales and other Medieval Texts. Electronic Textual Editing. Modern Language Association of America. Retrieved 10/11/09 from http://www.tei-c.org/About/Archive_new/ETE/Preview/robinson.xml
Schreibman, Susan, Siemens, Ray & Unsworth, John (2007) A Companion to Digital Humanities. Oxford: Wiley-Blackwell.