Talks & Presentations

  • Webinar with SMI, Paulina Burczynska and Michael Carl, May 11, 2017


Talk given at KSU, USA, on Feb. 23, 2017, and at HNU, China, on April 26, 2017

Novel logging technologies provide a wide range of possibilities for collecting behavioral data during human task execution. Eye-tracking and keystroke logging are two such technologies; they produce large amounts of behavioral data that are well suited to investigating the underlying cognitive processes during reading and writing. However, only a very limited number of metrics have been developed for fragmenting and measuring the stream of observable reading and writing data in a way suited to the analysis of the human translation process.

In this talk I approach translation process research from a 'big data' point of view, where the observable traces of reading and writing activities are the main variables of investigation. I give an overview of available translation product and translation process metrics and point to research directions which have the potential to provide us with a more comprehensive understanding of the human cognitive processes during translation.
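As a concrete illustration of what "fragmenting the stream of observable writing data" can mean in practice, the sketch below splits a keystroke log into production units at pauses above a threshold. This is a generic example, not a metric from the talk; the event format and the one-second threshold are assumptions.

```python
# Illustrative sketch only: one simple way to fragment a keystroke log
# into production units by splitting at pauses longer than a threshold.
# Timestamps are in milliseconds; the threshold value is arbitrary.

def fragment_keystrokes(events, pause_threshold_ms=1000):
    """Split a list of (timestamp_ms, char) keystroke events into
    production units wherever the inter-key pause exceeds the threshold."""
    units = []
    current = []
    for i, (t, ch) in enumerate(events):
        if current and t - events[i - 1][0] > pause_threshold_ms:
            units.append(current)
            current = []
        current.append((t, ch))
    if current:
        units.append(current)
    return units

# Example: three bursts of typing separated by long pauses
log = [(0, 'T'), (120, 'h'), (250, 'e'), (2300, ' '), (2410, 'c'),
       (2550, 'a'), (2690, 't'), (5000, '.')]
for unit in fragment_keystrokes(log):
    print(''.join(ch for _, ch in unit))
```

In this toy setup the log above is fragmented into three units ("The", " cat", "."); real process metrics would of course operate on richer logging data than plain characters and timestamps.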


Shreve Lecture at KSU on Feb. 21, 2017, and talk at the JAITS Translation Technology Interest Group meeting on March 26, 2017

In this talk I assess the literal translation hypothesis from an empirical point of view. Following Halverson (2015:320) I distinguish between literal translation as patterns of "intertextual correspondence" and default translation as an "immediate production mode". I discuss a measure of translation literality which is based on word order similarity and the amount of semantic overlap between words in a source sentence and their translations. I show that translations from English into very different languages (Japanese and Hindi) are less literal than translations from English into more similar languages (Spanish, German, Danish). I then introduce and quantify two characteristics of default translations, immediacy (i.e. the eye-key span) and durability (i.e. the number of revisions), and show that the durability of default translations correlates with the translations' literality scores.
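To make this kind of measure more tangible, here is a minimal, hypothetical sketch of a literality score that combines word-order similarity (estimated from how monotone the source-target word alignment is) with alignment coverage as a crude stand-in for semantic overlap. It is not the measure used in the talk; the alignment format and the equal weighting are assumptions.

```python
# Toy literality score, not the measure from the talk:
# (a) word-order similarity = share of non-crossing alignment link pairs,
# (b) coverage = share of source words that are aligned at all.

from itertools import combinations

def order_similarity(alignment):
    """1.0 for perfectly monotone alignments, lower when links cross."""
    pairs = list(combinations(alignment, 2))
    if not pairs:
        return 1.0
    crossings = sum(
        1 for (s1, t1), (s2, t2) in pairs if (s1 - s2) * (t1 - t2) < 0
    )
    return 1.0 - crossings / len(pairs)

def literality(alignment, n_source_words, weight=0.5):
    """alignment: list of (source_index, target_index) word links."""
    aligned_sources = {s for s, _ in alignment}
    coverage = len(aligned_sources) / n_source_words
    return weight * order_similarity(alignment) + (1 - weight) * coverage

# A monotone alignment vs. a heavily reordered one
monotone = [(0, 0), (1, 1), (2, 2), (3, 3)]
reordered = [(0, 3), (1, 2), (2, 1), (3, 0)]
print(literality(monotone, 4))   # close to 1.0 -> more literal
print(literality(reordered, 4))  # lower        -> less literal
```

Under this toy definition a fully monotone, fully aligned translation scores 1.0, while heavy reordering or unaligned source words lower the score, mirroring the intuition that translations into structurally distant languages come out as less literal.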


  • Emotional intelligence as a key to enhance translators’ employability and performance, Caroline Lehr, Oct 26, 2015

An individual’s emotional intelligence comprises skills that are essential for professional success and employability, such as self-motivation, stress regulation, adaptation to changing work environments and the management of relations with clients and other team members. In recent years, emotional intelligence has therefore increasingly been considered important for human performance and behavior in organizational settings. There is also empirical evidence suggesting that emotional intelligence can be meaningfully improved through training. In this presentation, I will outline the concept of emotional intelligence and illustrate its importance, taking as an example the work of professional translators, indispensable communicators in international organizations and companies. Moreover, I will present a research project on the development of an emotional intelligence training program tailored to this field of activity.


  • Strategies activated in the process of written translation. Factors of translation competence, Dagmara Plonska, March 4, 2015

My research project deals with translation competence, translation strategies and mental representation of the text being translated. The research concerns French-Polish translation and employs Translog as the primary tool. Firstly, I will introduce the theoretical background of the project based on the cognitive-communicative translation model by Hejwowski. In the second part of my presentation I will lay out the hypotheses of the project and the method I used. Finally, I will describe partial results of the project concerning translation strategies.


  • Strategies activated in the process of written translation - Factors of translation competence, Dagmara Plonska, September 12, 2014


  • SEECAT: ASR & Eye-tracking Enabled Computer-Assisted Translation, Mercedes Garcia Martinez, (CRITT, IBC), June 12, 2014

Typing has traditionally been the only input method used by human translators working with computer-assisted translation (CAT) tools. However, speech is a natural communication channel for humans and, in principle, it should be faster and easier than typing on a keyboard. This contribution investigates the integration of automatic speech recognition (ASR) into a CAT workbench, testing its real use by human translators while they post-edit machine translation (MT) output. The talk also explores the use of MT combined with ASR in order to improve recognition accuracy in a workbench that integrates eye-tracking functionality to collect process-oriented information about translators’ performance.
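One way to picture the combination of MT and ASR mentioned above: the MT output for the current segment can be used to rescore the ASR n-best list, boosting hypotheses that agree with the machine translation. The sketch below is a hedged illustration of this general idea, not the SEECAT implementation; the scores, weights and hypotheses are invented.

```python
# Hypothetical illustration: rescore an ASR n-best list using word overlap
# with the MT output of the same segment. All numbers here are invented.

def overlap(hyp, mt_output):
    """Fraction of hypothesis words that also occur in the MT output."""
    hyp_words = set(hyp.lower().split())
    mt_words = set(mt_output.lower().split())
    return len(hyp_words & mt_words) / max(len(hyp_words), 1)

def rescore(nbest, mt_output, mt_weight=0.3):
    """nbest: list of (asr_score, hypothesis); higher asr_score is better."""
    return max(
        nbest,
        key=lambda item: (1 - mt_weight) * item[0]
        + mt_weight * overlap(item[1], mt_output),
    )[1]

mt_output = "the contract was signed yesterday"
nbest = [
    (0.62, "the contract were signed yesterday"),
    (0.61, "the contract was signed yesterday"),
    (0.58, "the contrast was signed yesterday"),
]
print(rescore(nbest, mt_output))  # -> "the contract was signed yesterday"
```

The second hypothesis wins despite a slightly lower acoustic score because it matches the MT output word for word, which is the kind of effect an MT-informed ASR component aims for.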


  • Leveraging Big Data using Language Technology for Business Analytics, Dr Srinivas Bangalore, (AT&T Research Labs, USA), 29 January 2014

New technologies and new media are radically changing the way that a modern business communicates with customers, partners and (global) society. Both new communication technologies and new media are intricately pegged to and dependent on developments in language technologies - thus, for example, much of the current talk about big data actually calls for solutions based on a profound understanding of languages and how they can interact with technologies. This applies across managing social media, conducting market communication, generating data for sentiment analysis, harvesting consumer data and relying on translation technology to address multiple audiences in real time.

Taking as his point of departure the notion that business enterprises are massive warehouses of language data, Dr Srinivas Bangalore will explore some of the ways in which information and communication technologies allow tracking of language data that originates in internal communications, branding, marketing, procurement and customer care interactions. In his talk, Srinivas Bangalore will highlight some language technologies - speech recognition, language understanding, language translation and virtual agents - that have the potential to transform enterprises through deeper business analytics while concurrently enabling an enriched customer care experience.


  • The Effect of Post-Editing on Translation Strategies, Oliver Čulo and Jean Nitzke, (FTSK, Johannes Gutenberg-Universität Mainz), July 3, 2013

The term translation strategy refers to various methods or procedures applied by human translators in order to circumvent typical problems or avoid common errors in translation. Picking the right strategy at the right time is one of the many challenges in translation, and one would hope that, if liberated from the most basic translation task, i.e. no longer starting a translation from scratch but only revising an existing one, translators could invest more effort in ensuring that the right strategies are followed.

A special type of translation revision is the post-editing of machine translation (MT) output. MT output is still quite error prone and poses very specific problems: it sometimes “hits the nail on the head”, but in other cases it may completely fail to translate even a simple word.

The pilot study presented here investigates how the challenge of revising MT output interferes with translation strategies. It involved 12 professional translators and 12 translation students, all working from English (L2) into German (L1), translating or post-editing a number of texts according to a permutation scheme. Post-edited and human-translated texts were compared and analysed for possible interferences occurring in the post-editing task. Some individual cases are presented in the talk, and indications for future research are given.


  • Integrating Automatic Speech Recognition and Translation Processes, CRITT seminar on November 28, 2012

Translation Process Research at CRITT has mainly been concerned with written language translation. However, a recent project has also studied the prospects of spoken translations, and the newly created DanCAST center at IBC/CBS investigates the development and application of speech solutions (ASR, speech synthesis, speech interfacing in dialogue systems) in real-world scenarios. To advance this branch of research and to combine translation process research with speech translation technology, CRITT has invited two well-known researchers:

Srinivas Bangalore (AT&T, USA), a specialist in speech-to-speech translation, and Fabio Alves (UFMG, Brazil), known for his work in translation process research, to discuss human and machine translation processes in possible and realistic multi-modal translation scenarios.

    • 14:00 - 14:20 Introductory Note (Arnt Lykke Jakobsen)

    • 14:20 - 15:00 Opportunities and Challenges in Speech Translation (Srinivas Bangalore)

    • 15:00 - 15:40 Speech recognition in the medical domain - current and projected developments (Peter Juel Henrichsen and Andreas Søeborg Kirkedal)

    • 15:40 - 16:00 Coffee Break

    • 16:00 - 16:40 Speech and Eye-gaze Enabled Computer Assisted Translation (Michael Carl)

    • 16:40 - 17:20 The Human-machine Interface in Translation from a Process-oriented Perspective (Fabio Alves)

    • 17:20 - 17:30 Wrap up


  • Innovations and experiments on Machine Translation, Heshaam Faili, University of Tehran, August 10, 2012

Professor Heshaam Faili reports on innovations and experiments with machine translation systems between English and Persian (the official language of Iran), covering both rule-based and statistical approaches, automatic language-independent WordNet construction, automatic parallel corpus generation, and an experiment on bridging between a hand-crafted and a statistically trained English grammar and its use in MT. The session will go into particular depth on the MT and WordNet topics.


I would like to present my recent efforts to convert the Copenhagen Dependency Treebank (CDT) data into Treex. CDT is a multilingual treebank developed at CBS. Treex is a multi-purpose Natural Language Processing software framework developed at Charles University in Prague. Treex is used in a number of projects aimed at building language data resources as well as at developing NLP applications such as Machine Translation systems. The talk will have two parts. In the first part I will give an overview of the conversion procedure, with a focus on peculiarities of CDT that made the conversion a challenging task. In the second part I will show how the CDT data in the new format can be browsed and further processed.
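Since neither the actual CDT export format nor the Treex schema is described in the abstract, the sketch below only illustrates the generic kind of step such a conversion involves: turning a flat list of dependency records into a nested tree structure. The record format and field names are assumptions made for illustration.

```python
# Generic illustration only (not the CDT or Treex format): convert a flat
# list of (token_id, form, head_id) dependency records into a nested tree.

from collections import defaultdict

def build_tree(tokens):
    """tokens: list of (token_id, form, head_id); head_id 0 marks the root."""
    children = defaultdict(list)
    forms = {}
    for tid, form, head in tokens:
        forms[tid] = form
        children[head].append(tid)

    def node(tid):
        return {"form": forms[tid], "children": [node(c) for c in children[tid]]}

    root_id = children[0][0]  # assumes a single root for simplicity
    return node(root_id)

# Toy sentence: "Peter reads books"
flat = [(1, "Peter", 2), (2, "reads", 0), (3, "books", 2)]
print(build_tree(flat))
# -> nested dict with "reads" as root and "Peter", "books" as its children
```

A real conversion would additionally have to map annotation layers, attribute names and cross-references between the two schemes, which is where the peculiarities mentioned in the abstract come in.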


  • A study on concordancing: EU policy areas as translation problems, Paola Valli, April 27, 2011



  • Linear Unit Grammar, Anna Mauranen, Dean of the Humanities Faculty of the University of Helsinki, 7 May 2010


  • What’s going on in English? Explorations into the global Lingua Franca, Anna Mauranen, University of Helsinki, March 15, 2010

This talk will address these questions in the light of two kinds of evidence: corpus data and discourse analysis. Typical features in English as a lingua franca (ELF) grammar, lexis and phraseology are illustrated from a database of a million words of academic talk (the ELFA corpus). Some developments parallel those found in standard and non-standard varieties of English, while others appear specific to ELF. To understand how speakers manage to communicate effectively in environments where a broad range of non-standard forms and cultural backgrounds come together, it is useful to explore salient communicative practices in ELF discourse. ELF communities of practice differ in many respects from traditional speech communities, but like any communities, they regulate their speech norms to achieve communication and to avoid misunderstanding. Fundamental aspects of discourse get acted out with creative employment of shared language resources.