2nd International Workshop on
Automatic Translation for Signed and Spoken Languages


Machine Translation (MT) is a core technique for reducing language barriers between spoken languages. Although MT has come a long way since its inception in the 1950s, it still has a long way to go to successfully cater to all communication needs and users. When it comes to deaf and hard-of-hearing communities, MT is in its infancy. The complexity of the task of automatically translating between sign languages (SLs), or between sign and spoken languages, requires a multidisciplinary approach.

The rapid technological and methodological advances in deep learning, and in AI in general, over the last decade have not only improved MT, the recognition of image, video and audio signals, language understanding, the synthesis of life-like 3D avatars, etc., but have also led to a fusion of interdisciplinary research innovations that lays the foundation for automated translation services between sign and spoken languages.

The second edition of AT4SSL aims to be a venue for presenting and discussing (complete, ongoing or future) research on automatic translation between sign and spoken languages, and to bring together researchers, practitioners, interpreters and innovators working in related fields.

The AT4SSL workshop aims to open a (guided) discussion between participants about current challenges, innovations and future developments related to automatic translation between sign and spoken languages. To this end, AT4SSL will host a moderated round table around the following three topics: (i) quality of recognition and synthesis models and user expectations; (ii) co-creation: deaf, hearing and hard-of-hearing people joining forces towards a common goal; and (iii) sign-to-spoken and spoken-to-sign translation technology in media.


Data is one of the key factors for the success of today's AI, including language and translation models for sign and spoken languages. However, when it comes to SL machine translation and natural language processing, we face problems related to small volumes of (parallel) data, low veracity in terms of the origin of annotations (deaf or hearing interpreters), non-standardized annotations (e.g. glosses differ across corpora), video quality and recording settings, among others.

The theme of this edition of the workshop is Sign language parallel data – challenges, solutions and resolutions.


This workshop focuses on the following topics. However, submissions related to the general topic of automatic translation between signed and spoken languages that deviate from these topics are also welcome:



This edition is co-organised by the SignON and EASIER projects!


Interpreting between English and International Sign will be provided.


Dimitar Shterionov, workshop chair: d.shterionov@tilburguniversity.edu


Registration will be handled by the EAMT conference. (To be announced)