Measuring the emotional impact of AD on blind and visually impaired (B/VIP) users: A case study

Department of Foreign Languages, Literatures and Cultures

Head of the research project: Larissa D'Angelo, PhD

The present case study focuses on the emotional impact of audio description (AD) on blind and visually impaired (B/VIP) users, with the aim of furthering research in corpus-based, multimodal discourse analyses of film audio description (Díaz Cintas et al., 2007; Jiménez Hurtado & Soler Gallego, 2013; Jiménez Hurtado & Seibel, 2012) and in AD reception studies (Chmiel & Mazur, 2012; Di Giovanni, 2013; Igareda & Maiche, 2009; Orero, 2008; Walczak & Szarkowska, 2012). After a manually tagged multimodal corpus containing the filmic material and its AD transcription had been compiled, a selection of highly emotional film excerpts was shown to B/VIP respondents and to a control group of sighted respondents. Both groups were exposed to the same audiovisual material, and a computer-based facial expression analysis was carried out with iMotions software, which captures raw, unfiltered emotional responses to emotionally engaging content. The electrodermal activity of both groups was also measured with a Shimmer GSR, a galvanic skin response device, in order to validate the facial expression analysis. The results show the type and intensity of respondents' emotional responses to specific linguistic and visual stimuli, providing useful input for researchers and professionals involved in AD practice.
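
By way of illustration, the sketch below shows how exported emotion-channel time series could be aligned with the manually tagged AD segments to obtain per-tag response summaries of the kind discussed above. It is a minimal Python/pandas sketch under assumed data layouts: the file names (face_expressions.csv, gsr.csv, ad_segments.csv) and the column labels are hypothetical and do not reflect the actual iMotions or Shimmer export formats.

# Minimal sketch: aligning facial-expression and GSR time series with tagged AD segments.
# File names, column labels and the emotion channels used here are assumptions for
# illustration only, not the real iMotions/Shimmer export schema.
import pandas as pd

# Hypothetical per-frame facial-expression export: timestamp (s) plus emotion scores
face = pd.read_csv("face_expressions.csv")   # columns: time_s, joy, sadness, fear
# Hypothetical GSR export: timestamp (s) plus skin conductance in microsiemens
gsr = pd.read_csv("gsr.csv")                 # columns: time_s, conductance_uS
# Manually tagged AD segments from the multimodal corpus: onset, offset and tag
segments = pd.read_csv("ad_segments.csv")    # columns: start_s, end_s, tag

rows = []
for seg in segments.itertuples(index=False):
    # Select the samples that fall inside each tagged AD segment
    f = face[(face.time_s >= seg.start_s) & (face.time_s < seg.end_s)]
    g = gsr[(gsr.time_s >= seg.start_s) & (gsr.time_s < seg.end_s)]
    rows.append({
        "tag": seg.tag,
        "mean_joy": f.joy.mean(),
        "mean_sadness": f.sadness.mean(),
        "mean_fear": f.fear.mean(),
        "mean_conductance_uS": g.conductance_uS.mean(),
    })

summary = pd.DataFrame(rows)
print(summary.groupby("tag").mean())         # average response per AD tag

Run separately for the B/VIP group and the sighted control group, such per-tag summaries could then be compared to relate specific linguistic and visual stimuli to the measured emotional responses.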

 

References:

Chmiel, A., & Mazur, I. (2012). AD reception research: Some methodological considerations. In E. Perego (Ed.), Emerging topics in translation: Audio description (pp. 57-80). Trieste: EUT.

Díaz Cintas, J., Orero, P., & Remael, A. (Eds.) (2007). Media for All: Subtitling for the deaf, audio description and sign language. Amsterdam: Rodopi.

Díaz Cintas, J., Matamala, A., & Neves, J. (Eds.) (2010). New insights into audiovisual translation and media accessibility: Media for All 2. Amsterdam: Rodopi.

Di Giovanni, E. (2013). Visual and narrative priorities of the blind and non-blind: Eye tracking and audio description. Perspectives: Studies in Translatology. doi:10.1080/0907676X.2013.769610

Igareda, P., & Maiche, A. (2009). Audio description of emotions in films using eye tracking. In N. Barthouze, M. Gillies & A. Ayesh (Eds.), Proceedings of the symposium on mental states, emotions and their embodiment (pp. 20-23). London: SSAISB (The Society for the Study of Artificial Intelligence and the Simulation of Behaviour).

Jiménez Hurtado, C., & Seibel, C. (2012). Multisemiotic and multimodal corpus analysis in audio description: TRACCE. In A. Remael, P. Orero & M. Carroll (Eds.), Audiovisual translation and media accessibility at the crossroads: Media for All 3 (pp. 409-425). Amsterdam: Rodopi.

Jiménez Hurtado, C., & Soler Gallego, S. (2013). Multimodality, translation and accessibility: A corpus-based study of audio description. In R. Baños, S. Bruti & S. Zanotti (Eds.), Corpus linguistics and audiovisual translation: In search of an integrated approach [Special issue]. Perspectives: Studies in Translatology, 21(4), 577-594.

Orero, P. (2008). Three different receptions of the same film: ‘The Pear Stories Project’ applied to audio description. Perspectives: Studies in Translatology, 12(2), 179-193.

Szarkowska, A., & Jankowska, A. (2012). Text-to-speech audio description of voiced-over films. A case study of audio described Volver in Polish. In E. Perego (Ed.), Emerging topics in translation: Audio description (pp. 81-98). Trieste: EUT.