Building Generative Predictive Transformers for Sign Language
An EPSRC Programme Grant
1 May 2025 to 30 April 2030
SignGPT is a UKRI EPSRC Programme Grant focused on the development of an AI-powered translation system capable of unconstrained, bidirectional translation between British Sign Language (BSL) and English. The project will build the first generative predictive transformer for sign language, combining computer vision, sign linguistics, and machine learning. It is led by an interdisciplinary team from the University of Surrey, University of Oxford, and University College London, with direct involvement from Deaf organisations and community partners.
The vision of this Programme Grant is to develop AI and machine learning tools for seamless translation between signed and spoken languages. We aim to enable automatic translation of spoken language into photo-realistic sign language, and vice versa, from signed video to spoken output. Achieving this will require advances in sign recognition and synthetic sign generation, as well as the construction of a conversational sign language model for BSL.
To support this, we will curate the world’s largest sign language dataset and use it to train a SignGPT model, offering the Deaf community the same transformative benefits that large language models (LLMs) have brought to spoken and written language. This includes tools for annotation, language learning, assessment, and automatic interpretation, which together will form a Visual Language Toolkit.
These technologies will enable accessible, two-way communication between Deaf and hearing individuals: for example, a signer communicating through a phone using synthetic avatars and speech recognition. While BSL is the primary focus, the tools developed will support broader applications in sign, gesture, and non-verbal communication. The project will also extend to other signed and spoken languages in later phases.
Contact r.bowden@surrey.ac.uk or s.emery@surrey.ac.uk for more information about the project.