My research deals with language as it is expressed through multiple modalities, such as the visual (e.g., gestures and facial expressions) and the auditory (speech). Below is an overview of the projects I have worked on.
I am interested in the various mechanisms that make speech interdependent with body movements and gestures, in terms of communicative and prosodic needs, cognitive planning mechanisms, and biomechanics. In my PhD project, I investigated the effects of encouraging and restraining the use of gestures on voice and prosody.
I worked under the supervision of Maria Grazia Busà (Language and Communication Lab, University of Padova, Italy) and Pilar Prieto (ICREA-UPF, GREP, Group of Prosodic Studies, Universitat Pompeu Fabra, Barcelona).
My studies provide evidence that, in semi-spontaneous narrative speech, encouraging the use of gestures affects some prosodic features of speech (as evidenced by an increase in F0 maxima and intensity metrics); on the other hand, the inability to gesture is not detrimental to fluent speech production or spoken prosody. These results, obtained in a naturalistic setting, open up further discussion on the various mechanisms that make speech interdependent with body movements and gestures in terms of communicative and prosodic needs (and neural-cognitive planning mechanisms), as well as motor control and biomechanics.
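To illustrate the kind of acoustic measures mentioned above (F0 maxima and intensity metrics), the sketch below estimates a frame's fundamental frequency via autocorrelation and its RMS intensity in dB. This is a generic, self-contained illustration, not the analysis pipeline used in the studies; the pitch-range bounds and frame handling are assumptions.

```python
import math

def f0_autocorr(frame, sr, fmin=75.0, fmax=500.0):
    """Estimate the F0 of one audio frame (list of samples) by finding
    the autocorrelation peak within a plausible speech pitch range."""
    lo, hi = int(sr / fmax), int(sr / fmin)  # lag bounds from pitch range
    best_lag, best_r = 0, 0.0
    for lag in range(lo, min(hi, len(frame) - 1)):
        # Autocorrelation at this lag: high when the signal repeats here.
        r = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return sr / best_lag if best_lag else 0.0

def rms_db(frame):
    """Mean intensity of a frame as dB relative to full scale."""
    rms = math.sqrt(sum(x * x for x in frame) / len(frame))
    return 20 * math.log10(rms) if rms > 0 else float('-inf')
```

Tracking these values frame by frame over an utterance and taking, e.g., the maximum F0 per intonational phrase yields the kind of metrics reported above.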
In the future, I would be interested in investigating ways of leveraging the visual modality of language in areas such as first and second language acquisition, as well as communication in language-impaired populations.
Relevant publications
Cravotta, A. (2019). Restraining and Encouraging the Use of Gesture: Exploring the Effects on Speech. PhD dissertation. [pdf]
Cravotta, A., Busà, M. G., & Prieto, P. (2019). Effects of Encouraging the Use of Gestures on Speech. Journal of Speech, Language, and Hearing Research, 62(9), 3204-3219. [pdf]
Cravotta, A., Prieto, P., & Busà, M. G. (2019). Encouraging Gesture Use in a Narration Task Increases Speakers' Gesture Rate, Gesture Salience and the Production of Representational Gestures. In Proceedings of the 6th GESPIN conference (pp. 21-26). [pdf]
Cravotta, A., Prieto, P., & Busà, M. G. (under review). Exploring the Effects of Restraining the Use of Gestures on Speech.
In 2016 I worked as a research assistant in the European FP7 project Slándáil (Security System for language and image analysis), coordinated by Prof. Khurshid Ahmad (Trinity College Dublin). The project's main goal was to develop and deliver an emergency management platform that aggregates and makes use of information available on social media (e.g., video, images, texts, speech, visual icons, metadata). I was part of a team working on speech and non-verbal communication analytics. Our work focused on the communication of emotions by emergency first responders, local authorities and TV news reporters in natural disaster situations. My activities involved collecting video data, annotating gestures, carrying out acoustic analyses for emotive speech assessment, and testing the iMotions platform for facial expression of emotions.
Relevant publications
Busà, M. G., & Cravotta, A. (July, 2016). Can reporters' involvement in the news be detected by looking at their gestures and listening to their pitch? 7th Conference of the International Society for Gesture Studies (ISGS) (Paris, France). [pdf]
Busà, M. G., & Cravotta, A. (2016). Detecting Emotional Involvement in Professional News Reporters: An Analysis of Speech and Gestures. In Proceedings of the LREC 2016 EMOT Workshop: Emotions, Metaphors, Ontology and Terminology during Disasters (Portorož, Slovenia). [pdf]
Analyzing human multimodal language entails taking into consideration the various visual cues that accompany speech (e.g., gestures). The analysis of visual signals often requires detailed manual annotation of video data, which is a labor-intensive and time-consuming process.
I collaborated with Naoto Ienaga (Keio University, Tokyo) to develop a toolkit for the automatic detection and annotation of hand gestures in video data, combining a state-of-the-art pose estimation method (OpenPose) with machine learning.
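As a rough illustration of this kind of approach, the sketch below segments candidate gesture intervals from per-frame wrist coordinates (such as those a pose estimator like OpenPose provides in its per-frame keypoint output) using a simple speed threshold. This is an illustrative assumption-laden sketch, not the published toolkit, which applies machine learning on top of such trajectory features; the function name and the threshold values are hypothetical.

```python
def detect_gesture_intervals(wrist_xy, fps=30.0, speed_threshold=0.5, min_frames=5):
    """Return (start, end) frame intervals where wrist speed exceeds a threshold.

    wrist_xy: list of (x, y) wrist coordinates, one per video frame.
    speed_threshold: minimum wrist speed (coordinate units per second)
    to count as movement; min_frames filters out very short blips.
    """
    # Per-frame speed: Euclidean displacement between consecutive frames,
    # scaled by the frame rate to get units per second.
    speeds = [0.0]
    for (x0, y0), (x1, y1) in zip(wrist_xy, wrist_xy[1:]):
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * fps)

    # Group consecutive above-threshold frames into intervals.
    intervals, start = [], None
    for i, s in enumerate(speeds):
        if s > speed_threshold and start is None:
            start = i
        elif s <= speed_threshold and start is not None:
            if i - start >= min_frames:
                intervals.append((start, i))
            start = None
    if start is not None and len(speeds) - start >= min_frames:
        intervals.append((start, len(speeds)))
    return intervals
```

Intervals found this way could then be proposed to a human annotator for verification, which is the semi-automated workflow the toolkit aims at.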
Relevant publications
Ienaga, N., Cravotta, A., Terayama, K., Scotney, B., Saito, H., & Busà, M. G. (2022). Semi-automation of gesture annotation by machine learning and human collaboration. Language Resources and Evaluation, 1-28. [pdf]
Ienaga, N., Scotney, B., Saito, H., Cravotta, A., & Busà, M. G. (2018). Natural Gesture Extraction Based on Hand Trajectory. In Proceedings of the 20th Irish Machine Vision and Image Processing conference (IMVIP) (pp. 81-88). [pdf]