Panel 2: Grounded Theory in Translation Studies

Grounded Theory is an inductive theory-discovery methodology based on a continuous interplay between data collection and data analysis. Its approach to theory development rests on an iterative cycle of "coding steps" that begins with the analysis of empirical data (rather than with the deployment of a pre-existing theory) and ends with an integrated theoretical framework grounded in the data. The intermediate steps may be described as follows (a toy code sketch of this cycle appears after the list):

  • Simultaneous collection and analysis of data

  • Creation of analytic codes and concepts from inspection of the data

  • Discovery of the basic processes that created the data

  • Inductive construction of abstractions and categories

  • Theoretical sampling to refine categories

  • Writing of analytical memos as a step towards a grounded theory

  • The integration of categories into a theoretical framework
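
Viewed computationally, the cycle can be pictured with the following toy sketch; every function and label in it is hypothetical and only mirrors the steps listed above, it does not implement any existing Grounded Theory tool.

    # Toy illustration of the coding cycle listed above; all names are hypothetical.
    from collections import defaultdict

    def open_code(segment):
        """Step 2: assign analytic codes to a raw data segment (toy heuristic)."""
        return ["hesitation"] if "..." in segment else ["fluent production"]

    def build_categories(coded_segments):
        """Steps 3-4: group coded segments into higher-level categories."""
        categories = defaultdict(list)
        for segment, codes in coded_segments:
            for code in codes:
                categories[code].append(segment)
        return categories

    def coding_cycle(batches):
        """Steps 1 and 5-7: iterate collection, coding, memo writing and integration."""
        coded, memos, categories = [], [], {}
        for round_no, batch in enumerate(batches, start=1):  # simultaneous collection and analysis
            coded += [(seg, open_code(seg)) for seg in batch]
            categories = build_categories(coded)             # inductive abstraction into categories
            memos.append(f"round {round_no}: {len(categories)} categories so far")  # analytic memo
        return categories, memos                             # integrated (toy) framework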

The panel calls for contributions that describe any or all of these coding steps and highlight how codes, concepts, categories and theories emerge from data in the context of translation studies. The panel is open to presentations making use of all kinds of data sources: video, written, spoken or interpreting data, monolingual or multilingual, and all kinds of data acquisition devices that allow for the construction of Grounded Translation Theory from textual data, behavioral or brain activity data, in-depth interviews, human-machine or social interaction, or others.

For informal enquiries: mc.ibc@cbs.dk

Accepted Abstracts of Panel 02:

Data Collection of Visual and Brain Activity: A Combined Method for Analyzing Eye Tracking and fMRI Data in the Context of Translation Studies

Karina Szpak (Universidade Federal de Minas Gerais)

Abstract:

Human neuroimaging is a rapidly growing field that has developed around the acquisition and analysis of functional Magnetic Resonance Imaging (fMRI) data. The increasing ease of access to this technology has resulted in new ground being broken in research on the cognitive aspects of translation (Chang, 2009; Moser-Mercer, 2010; Sturm, 2016). As powerful as fMRI is, it cannot by itself address the causal role of a particular brain region in a particular task; correlations between neuroimaging and behavioral methods are therefore necessary (Mather, Cacioppo & Kanwisher, 2013). To address this issue, we provide examples from our own study of the nature of translation processing, which proposes a methodological integration of the analysis of eye tracking and fMRI data. First, we discuss the analysis of fMRI data, from the acquisition of the raw data to its use in locating brain activity. Then, we discuss the analysis of eye tracking data, from the acquisition of the raw data to its use in computing gaze activity. Finally, we discuss the crucial role that the data acquisition software (E-Prime) plays in synchronizing both the scanning and the eye tracking sessions with the behavioral task. We conclude by addressing the limitations of this proposal, future directions in its development, its relationship to other neuroimaging techniques, and the role of functional neuroimaging in translation process research.
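
As a purely illustrative aside, the sketch below shows one way exported eye tracking samples and fMRI volume onsets might be placed on a common timeline after the E-Prime-synchronized sessions; the file layout, column names and trigger offset are assumptions, not the authors' pipeline.

    # Hypothetical alignment of eye tracking samples with fMRI volume onsets.
    # Column names (timestamp_ms, volume_onset_ms) are assumed, not taken from the study.
    import pandas as pd

    def align_streams(gaze_csv, volumes_csv, trigger_offset_ms=0):
        """Put gaze samples and scanner volumes on one clock and pair each gaze
        sample with the most recent fMRI volume (nearest preceding onset)."""
        gaze = pd.read_csv(gaze_csv)               # expects a 'timestamp_ms' column
        vols = pd.read_csv(volumes_csv)            # expects a 'volume_onset_ms' column
        gaze["timestamp_ms"] += trigger_offset_ms  # shift the gaze clock onto the scanner clock
        gaze = gaze.sort_values("timestamp_ms")
        vols = vols.sort_values("volume_onset_ms")
        return pd.merge_asof(gaze, vols,
                             left_on="timestamp_ms",
                             right_on="volume_onset_ms",
                             direction="backward")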

Keywords:

Data analysis

fMRI

Eyetracking

E-Prime

‘Monitoring’ in Translation: The Role of Visual Feedback

Silvia Hansen-Schirra (Johannes Gutenberg-Universität Mainz), Moritz Schaeffer (Johannes Gutenberg-Universität Mainz), and Sandra Louise Halverson (Norwegian University of Applied Sciences)

Presented by: Moritz Schaeffer

Abstract:

One construct currently experiencing a revival within TPR is the notion of ‘monitoring’, or ‘a monitor’ as a mechanism of mental control (e.g. Tirkkonen-Condit 2005; Schaeffer and Carl 2013). This paper queries the theoretical content of this construct by considering it relative to a model of working memory in writing (Chenoweth and Hayes 2003) and to work within bilingualism studies (de Groot 2011: 326ff).

A crucial element of monitoring in translation is the visual feedback available on the computer screen. This characteristic is exploited in an exploratory study of monitoring activity. The study utilizes two conditions, with and without visual feedback from the typed target text, in order to identify some of the characteristics of monitoring. The effect of the visibility of the target text on both behaviour (eye movements, keystrokes and reaction times) and the product will be investigated. Previous studies show that visual feedback has an effect on low-level execution processes but not on high-level processes such as formulation (Olive and Piolat 2002), and that task time (in original writing) is significantly reduced while inter-key-press latencies are inhibited significantly, albeit only slightly (Torrance et al. 2016). Since monitoring in translation also involves cross-linguistic assessments, the same effects are not expected to be found here. The empirical results will be fed back into existing theoretical models to explain control mechanisms during translation.
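
For illustration only, the sketch below shows how inter-key-press latencies could be computed from a keystroke log and compared across the two visibility conditions; the column and condition names ('participant', 'condition', 'time_ms', 'visible', 'masked') are assumptions rather than the authors' actual data format or analysis.

    # Hypothetical comparison of inter-key-press latencies with vs. without visual feedback.
    from scipy import stats

    def ikp_latencies(keylog):
        """Inter-key-press latencies (ms) per participant and condition.
        keylog: pandas DataFrame with 'participant', 'condition', 'time_ms' columns (assumed)."""
        keylog = keylog.sort_values(["participant", "condition", "time_ms"])
        keylog["ikp_ms"] = keylog.groupby(["participant", "condition"])["time_ms"].diff()
        return keylog.dropna(subset=["ikp_ms"])

    def compare_conditions(keylog):
        """Paired comparison of per-participant median latency across the two conditions."""
        lat = ikp_latencies(keylog)
        med = lat.groupby(["participant", "condition"])["ikp_ms"].median().unstack("condition")
        return stats.wilcoxon(med["visible"], med["masked"])  # condition labels are assumptions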

Keywords:

Monitor model

Mental control

Visual feedback

Keylogging

Eyetracking

Identifying Problems in Translation Process Data – from Empirical Analyses to a Theoretical Model

Jean Nitzke (Johannes Gutenberg-Universität Mainz)

Abstract:

Translation and post-editing can often be categorised as problem-solving activities. When the translation of a source text unit is not obvious to the translator at first sight, or, in other words, when there is a hurdle between the source item and the target item, the translation process can be considered problematic. Conversely, when there is no hurdle between the source and target text, the translator is solving a task rather than a problem (adapted from Dörner 1987). In recent studies, think-aloud protocols have been used to identify problems in translation sessions, e.g. Krings (2001) or Kubiak (2009). However, think-aloud protocols have many disadvantages, including that the method itself changes the translation process (Jakobsen 2003) and that translators cannot completely reproduce what is going on in their minds (Jääskeläinen 2010).

This talk will present a model that suggests how to identify problems at the word level with the help of keylogging and eyetracking data. I analysed data from 24 translators (twelve professionals and twelve semi-professionals) who translated from scratch from English into German and post-edited MT output for this study, which is part of the CRITT TPR-DB database (Carl et al. 2016). On this data set, I used regression analyses to model, for each part of speech (PoS), which of a series of keylogging parameters, namely Munit, InEff, HTra, and HCross (cf. ibid.), can contribute to identifying problems when triangulated with eyetracking data and production time.
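
For readers who want to picture the modelling step, here is a hedged sketch of a per-PoS regression over the keylogging parameters named above; the binary 'problem' column, the other column names and the use of statsmodels are illustrative assumptions, since the abstract does not specify an implementation.

    # Hypothetical per-PoS logistic regression of a binary 'problem' indicator
    # on the keylogging parameters named in the abstract (Munit, InEff, HTra, HCross).
    import statsmodels.formula.api as smf

    FEATURES = ["Munit", "InEff", "HTra", "HCross"]

    def fit_per_pos(df):
        """Fit one model per part of speech; 'problem' is an assumed 0/1 column
        derived from triangulated eyetracking data and production time."""
        formula = "problem ~ " + " + ".join(FEATURES)
        models = {}
        for pos, sub in df.groupby("PoS"):
            if sub["problem"].nunique() < 2:
                continue  # skip PoS classes with no variation in the outcome
            models[pos] = smf.logit(formula, data=sub).fit(disp=0)
        return models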

Keywords:

Problem solving

Translation process research

Keylogging

Eyetracking

Post-editing

Process Studies and Post-editing Training: Investigating English-Chinese Post-editing Process in the Classroom

Yanfang Jia (Hunan University), Xiangling Wang (Hunan University), and Michael Carl (Renmin University of China)

Presented by: Yanfang Jia

Abstract:

This study presents a series of translation process experiments carried out in a Master of Translation and Interpreting (MTI) course in China. The research was designed to support post-editing (PE) training and to empirically test the impact of task type, text type, translation briefs, and PE guidelines on temporal, technical and cognitive effort in the English-Chinese PE and human translation (HT) processes. The three types of effort were gauged by production time per word, keystroke insertions and deletions, and pause-to-word ratio (PWR) with a pause threshold of 1000 ms, respectively. Thirty-one MTI students were assigned to translate or post-edit six texts according to specific translation briefs, which called for good-enough quality for internal dissemination or publishable quality for external dissemination. Light and full TAUS PE guidelines were provided for the corresponding PE tasks. Triangulated data from keystroke logging, screen recording, questionnaires, guided interviews with the subjects, and written protocols were used for the analysis. The preliminary results suggest that a) post-editing significantly reduces temporal, technical and cognitive effort compared to translating manually; b) light post-editing takes more time than full post-editing; and c) the students found PE different from human translation in many ways. They also reported various challenges in the post-editing process, mainly due to the influence of their previous translation training, their lack of experience in PE, and the ambiguous wording of the guidelines. The results also indicate that the direct use of process studies in the classroom can be effective both for producing quantitatively valid research findings and for fulfilling pedagogical functions.
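
As an illustration of the effort measures described above, the sketch below computes production time per word, keystroke insertions and deletions, and PWR with the 1000 ms pause threshold from a hypothetical per-keystroke log; the column names are assumptions.

    # Hypothetical computation of the three effort indicators from a per-keystroke log.
    def effort_indicators(keylog, n_target_words, pause_threshold_ms=1000):
        """keylog: pandas DataFrame with 'time_ms' and 'type' columns (assumed layout);
        'type' marks each keystroke as 'insertion' or 'deletion'."""
        keylog = keylog.sort_values("time_ms")
        gaps = keylog["time_ms"].diff().dropna()
        n_pauses = int((gaps >= pause_threshold_ms).sum())
        production_time_ms = keylog["time_ms"].iloc[-1] - keylog["time_ms"].iloc[0]
        return {
            "time_per_word_ms": production_time_ms / n_target_words,  # temporal effort
            "insertions": int((keylog["type"] == "insertion").sum()),  # technical effort
            "deletions": int((keylog["type"] == "deletion").sum()),
            "PWR": n_pauses / n_target_words,                          # cognitive effort proxy
        }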

Keywords:

Post-editing Effort

Post-editing Guidelines

Text type

Post-editing Training

Translation process

MTI students

The Revision Phase under the Spotlight: Fast Drafting, Long Revision?

Anke Tardel, Moritz Schaeffer and Silvia Hansen-Schirra (Johannes Gutenberg-Universität Mainz)

Presented by: Anke Tardel

Abstract:

Research on revision during translation is increasingly moving into the focus of TPR (Mossop/Künzli 2014). To date, however, little is known about translators' behavior during this phase. The CRITT Translation Process Research Database (TPR-DB) (Carl et al. 2016) contains a large amount of keylogging and eye tracking data from multilingual translation studies, presented in tables featuring measures such as durations and timestamps that describe the translator's behavior during translation. We used these timestamps to calculate the absolute and relative durations of the orientation, drafting and revision phases (Jakobsen 2002). In this study, we look at a multilingual subset of six studies from the TPR-DB to investigate the effect of keyboard and gaze activity during the drafting phase on the revision phase. Further, we investigate the effect of this behavior, in relation to the behavior during the revision phase, on the quality (Mertin 2006) of the final target text. We find that participants with longer average pauses between stretches of continuous text production and more concurrent target text reading during the drafting phase tend to have shorter relative revision phases, while non-sequential typing during drafting results in longer relative revision phases. We also find shorter relative revision phases with more relative source and target text reading during drafting, and fewer deletions in the revision phase the longer the relative drafting phase is and the more deletions and concurrent target text reading and typing occur during drafting. These results will be discussed in relation to current models of the translation process.
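
The phase computation described above can be pictured with the following sketch, which derives absolute and relative durations for the orientation, drafting and revision phases from keystroke timestamps; the column names and the session boundary timestamps are assumptions rather than the actual TPR-DB table layout.

    # Hypothetical derivation of phase durations from a keystroke log,
    # following Jakobsen's (2002) three-phase division of the translation session.
    def phase_durations(keystrokes, session_start_ms, session_end_ms):
        """Orientation: before the first keystroke. Drafting: first keystroke up to the
        last keystroke that extends the target text (a rough approximation of the end
        of the first complete draft). Revision: the remainder of the session."""
        ks = keystrokes.sort_values("time_ms")
        first_key = ks["time_ms"].iloc[0]
        draft_end = ks.loc[ks["extends_text"], "time_ms"].max()  # 'extends_text' flag is assumed
        total = session_end_ms - session_start_ms
        absolute = {
            "orientation": first_key - session_start_ms,
            "drafting": draft_end - first_key,
            "revision": session_end_ms - draft_end,
        }
        relative = {phase: dur / total for phase, dur in absolute.items()}
        return absolute, relative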

Keywords:

Translation process research

Translation revision

Grounded theory

Eye tracking

Keylogging

Intermediate Versions in the Translation of Popular Scientific Texts

Arndt Heilmann and Stella Neumann (RWTH Aachen University)

Abstract:

Translation involves monitoring of the produced text, and this may lead to early or late revision in the genesis of a translated text (Tirkkonen-Condit 2005). These revisions can be related to typos, but also to functionally relevant changes such as choosing a different word or wording for the translation of a stretch of text. Alves and Couto Vale (2011) showed that it is possible to identify different revision profiles based on macro and micro units of translation. We assume that an analysis of intermediate versions can also benefit from taking into account behavioural information as well as linguistic information about the source text, target text and intermediate text(s). In combination with the assessment of behavioural measures and the related linguistic functions, this procedure will allow us to construct categories that describe different types of intermediate versions in the translation process. These categories will be based on the reading and typing behaviour related to the production and deletion of an intermediate version and on the intermediate version's linguistic function. To this end, source, target and intermediate texts are part-of-speech annotated with TreeTagger (Schmid 1995) and manually enriched with functional linguistic categories from the Cardiff Grammar (Fawcett 2007). Creating bottom-up categories derived from triangulated multiple data sources may help describe the revision process in translation more aptly than looking at either data source in isolation. When comparing the bottom-up categories to the final version of the text, it becomes possible to generate hypotheses about the decision process of a translator.
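
Since TreeTagger is named explicitly, a minimal sketch of the automatic annotation step is given below, assuming the third-party treetaggerwrapper Python package and a local TreeTagger installation; the Cardiff Grammar enrichment remains manual and is represented only as an empty field.

    # Hypothetical POS annotation of source, intermediate and target versions with TreeTagger,
    # leaving a slot for the manual Cardiff Grammar annotation mentioned in the abstract.
    # Assumes the third-party 'treetaggerwrapper' package and a local TreeTagger install.
    import treetaggerwrapper

    def annotate(text, lang="en", tagdir="/opt/treetagger"):  # path is an assumption
        tagger = treetaggerwrapper.TreeTagger(TAGLANG=lang, TAGDIR=tagdir)
        tags = treetaggerwrapper.make_tags(tagger.tag_text(text))
        return [{"word": t.word, "pos": t.pos, "lemma": t.lemma, "cardiff_function": None}
                for t in tags if hasattr(t, "pos")]  # skip non-tag items (e.g. URLs)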

Keywords:

Intermediate Versions

Eyetracking

Keystroke Logging

Translation Process Studies

Searching for Deverbalization: Can Neuroimaging Provide Physiological Evidence of Deeper Processing?

Masaru Yamada and Shoko Toyokura (Kansai University)

Abstract:

Translation is theorized to involve deverbalization, through which the translator works beyond finding word-for-word equivalencies (transcoding) to grasp the meaning of a source text in context. This activity is also referred to as deep processing and allows translators to convey the underlying messages of source texts accurately. Although this model allows translation students and practitioners to conceptualize the translation process, it has been criticized because of the lack of any physiological evidence of deverbalization.

This research attempts to visualize deverbalization during translation by drawing on the neuroimaging technology NIRS (near-infrared spectroscopy) to capture translators’ brain activity. Based on previous studies (Sakai, 2005) showing that both L1 and L2 are processed in the same area of the brain, near Broca’s area (the grammar center) in the left hemisphere, the authors investigate the brain activity of translators coping with translation difficulties that normally require high cognitive effort. It is assumed that when deverbalization or deep processing is triggered, areas in the brain other than (or in addition to) the grammar center are activated and can be observed through NIRS. This study confirms that the application of neuroimaging in translation process research is a valuable method for understanding the details of our cognition, particularly in making connections between high cognitive effort activities and brain functions.

Keywords:

Deverbalization

Translation process research

Neuroimaging

Recognition and Characterization of Translator Expertise using Motor and Perceptual Activities

Pascual Martínez-Gómez (Tokyo Institute of Technology)

Abstract:

The process of translating is a complex human activity that seems to resist modern techniques of fine-grained modeling. Our grand objective is to reach greater levels of understanding of the translation process and ultimately to build interfaces and predictive models that ease the translation task. In this talk we suggest a framework for constructing such a model and describe how we can also use it to identify characteristic behavioral patterns of translator expertise. In this framework, we first hypothesize a causal relationship between the expertise of a translator and her behavioral patterns. We then automatically construct a function that quantifies this relationship using the TPR-DB, a database that records motor and perceptual activities of translators with different levels of expertise. Using this function, we discovered that expert translators spend larger proportions of time on the concurrent activities of reading the source text and typing the target text than non-expert translators do. We therefore conjecture that favoring these concurrent activities when designing user interfaces for computer-assisted translation systems may ease the translation process. In our experiments, we also found that perceptual activities (as measured by an eye tracker) contribute greatly to the characterization of translator expertise, and we thus recommend their use in similar studies. In this talk we will describe our framework in detail and discuss the challenges of jointly analyzing motor (keystroke) and perceptual (reading) activities to gain a deeper understanding of the translation process.
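
A hedged sketch of the kind of function described in the abstract, i.e. a model mapping behavioral features of a session to an expertise label, might look as follows; the feature names and the choice of logistic regression are illustrative assumptions, not the author's actual method.

    # Hypothetical classifier relating behavioral features to translator expertise.
    # Feature names (e.g. proportion of concurrent ST reading + TT typing) are assumptions.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    FEATURES = ["prop_concurrent_read_type", "prop_st_reading", "prop_tt_reading",
                "mean_pause_ms", "keystrokes_per_word"]

    def expertise_model(sessions):
        """sessions: pandas DataFrame with one row per translation session, the
        features above, and a binary 'expert' column; returns the fitted model
        and the mean cross-validated accuracy."""
        X, y = sessions[FEATURES], sessions["expert"]
        model = LogisticRegression(max_iter=1000)
        scores = cross_val_score(model, X, y, cv=5)
        return model.fit(X, y), scores.mean()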

Keywords:

Translation Process Research

Eyetracking

Predictive Models

Towards a Finer-Grained Classification of Translation Styles Based on Eye-Tracking, Key-Logging and RTP Data

Jia Feng and Michael Carl (Renmin University of China)

Abstract:

This research endeavors to reach a finer-grained classification of translation styles based on observations of Translation Progression Graphs (TPGs) that integrate translation process data and translation product data. Translation styles are first coded based on the findings and classifications of Jakobsen (2002), Carl et al. (2011) and Dragsted & Carl (2013). The qualitative observations from the TPGs are then triangulated with quantitative data from the TPR-DB tables (cf. Carl et al. 2016), and the findings are further triangulated with the translators’ cued retrospective protocol data, which also help explain them.

Eye-tracking and keystroke logging data were collected from 43 postgraduate Chinese students translating 2 texts from English into Chinese and 2 texts from Chinese into English, with 2 levels of source text difficulty in each translation direction. No time limit was set for the translation tasks. Each translation task was immediately followed by a retrospective protocol with the eye-tracking replay as the cue. We are also interested in whether translation directionality and source text difficulty have an impact on translation styles.

We explore 1) translation styles in terms of the different ways of allocating attention across the three phases of the translation process, 2) translation styles in the orientation phase, 3) translation styles in the drafting phase, with a special focus on online planning, backtracking, online revision, and the distribution of attention between ST and TT, and 4) translation styles in the revision and monitoring phase, with a special focus on end-revision.
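
Purely as an illustration of how process features could be grouped into style profiles quantitatively, the sketch below clusters sessions on the relative durations of the three phases; this is not the authors' coding scheme, which is qualitative and based on Jakobsen (2002), Carl et al. (2011) and Dragsted & Carl (2013).

    # Hypothetical clustering of translators into style profiles from phase proportions.
    import pandas as pd
    from sklearn.cluster import KMeans

    def style_profiles(sessions, n_styles=3):
        """sessions: pandas DataFrame with one row per translator/session and relative
        phase durations in columns 'orientation', 'drafting', 'revision' (assumed)."""
        X = sessions[["orientation", "drafting", "revision"]]
        km = KMeans(n_clusters=n_styles, n_init=10, random_state=0).fit(X)
        labelled = sessions.assign(style=km.labels_)
        centers = pd.DataFrame(km.cluster_centers_,
                               columns=["orientation", "drafting", "revision"])
        return labelled, centers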

Keywords:

Translation styles

Translation Progression Graphs

Eye-tracking

Keystroke logging

Cued Retrospective Protocols

Panel Organizers:

Michael Carl (Renmin University of China)

Elisabet Tiselius (Stockholm University)

Silvia Hansen-Schirra (Johannes Gutenberg University Mainz)

Moritz Schaeffer (Johannes Gutenberg University Mainz)

Bio-note of Panel Organizers:

Michael Carl is Professor at the MTI Education Center of the School of Foreign Languages at Renmin University of China and Director of the Center for Research and Innovation in Translation and Translation Technology (CRITT) at Copenhagen Business School, Denmark. His current research interests are the investigation of human translation processes and interactive machine translation. He has also worked on machine translation, terminology tools, and the implementation of natural language processing software. Dr. Carl has organized numerous workshops, scientific meetings and panels on translation and translation process related topics and has published widely in this field of research.

Elisabet Tiselius is Director of Studies for Interpreting at the Institute for Interpreting and Translation Studies, Department of Swedish Language and Multilingualism at Stockholm University. Elisabet’s research interests are cognitive processes in interpreting, interpreters’ and translators’ development of competence and expertise, deliberate practice in interpreting as a characteristic of expertise, and child language brokering.

Silvia Hansen-Schirra is Professor of English Linguistics and Translation Studies at Johannes Gutenberg University Mainz in Germersheim, Germany. Her main research interests include specialized communication, text comprehensibility, post-editing, and translation process and competence research. As a fellow of the Gutenberg Research College, she is the Director of the Translation & Cognition (TRA&CO) Center in Germersheim and co-editor of the online book series Translation and Multilingual Natural Language Processing.

Moritz Schaeffer received his PhD from the University of Leicester and has since worked as a Research Associate at the Center for Research and Innovation in Translation and Translation Technology (CRITT), Copenhagen Business School, and at the Institute for Language, Cognition and Computation, University of Edinburgh. He is now a Research Associate at the Translation & Cognition (TRA&CO) Center at Johannes Gutenberg University Mainz.