End-to-End Audiovisual Speech Recognition

Stavros Petridis, Themos Stafylakis, Pingchuan Ma, Feipeng Cai, Georgios Tzimiropoulos, Maja Pantic

Imperial College London, University of Nottingham

This is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one per modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer bidirectional gated recurrent unit (BGRU) network, and the fusion of the streams/modalities takes place via another 2-layer BGRU. A slight improvement in classification rate over an end-to-end audio-only model and an MFCC-based model is reported in clean audio conditions and at low levels of noise. In the presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.
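For concreteness, here is a minimal PyTorch sketch of this two-stream layout. The convolutional front-ends, hidden size, input shapes (29 frames of 96x96 mouth ROIs and 640 raw audio samples per frame), and the last-time-step readout are placeholder assumptions for illustration only; what mirrors the description above is the overall structure: one 2-layer BGRU per modality followed by a 2-layer fusion BGRU and a 500-way LRW word classifier.

```python
import torch
import torch.nn as nn

class StreamBGRU(nn.Module):
    """One modality stream: per-frame features -> 2-layer bidirectional GRU."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.bgru = nn.GRU(feat_dim, hidden_dim, num_layers=2,
                           batch_first=True, bidirectional=True)

    def forward(self, x):          # x: (batch, time, feat_dim)
        out, _ = self.bgru(x)      # out: (batch, time, 2 * hidden_dim)
        return out

class AudioVisualModel(nn.Module):
    """Two-stream audiovisual model with BGRU fusion (placeholder front-ends)."""
    def __init__(self, num_classes=500, hidden_dim=256):
        super().__init__()
        # Placeholder visual front-end: per-frame conv over 96x96 grayscale mouth ROIs.
        self.visual_frontend = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())           # -> 32 * 4 * 4 = 512 per frame
        # Placeholder audio front-end: 1D conv over the raw waveform chunk per video frame.
        self.audio_frontend = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=80, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())           # -> 64 * 8 = 512 per frame
        self.visual_stream = StreamBGRU(512, hidden_dim)
        self.audio_stream = StreamBGRU(512, hidden_dim)
        # Fusion: concatenate the two stream outputs, model jointly with another 2-layer BGRU.
        self.fusion_bgru = nn.GRU(4 * hidden_dim, hidden_dim, num_layers=2,
                                  batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, video, audio):
        # video: (batch, time, 1, 96, 96); audio: (batch, time, samples_per_frame)
        b, t = video.shape[:2]
        v = self.visual_frontend(video.reshape(b * t, 1, 96, 96)).reshape(b, t, -1)
        a = self.audio_frontend(audio.reshape(b * t, 1, -1)).reshape(b, t, -1)
        v = self.visual_stream(v)
        a = self.audio_stream(a)
        fused, _ = self.fusion_bgru(torch.cat([v, a], dim=-1))
        return self.classifier(fused[:, -1])   # readout from the last time step (one possible choice)

model = AudioVisualModel()
logits = model(torch.randn(2, 29, 1, 96, 96), torch.randn(2, 29, 640))
print(logits.shape)  # torch.Size([2, 500])
```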

The results obtained with the proposed model on the LRW dataset* are the following:

Audio (End-to-End): 97.72%

Visual (End-to-End): 83.39%

Audiovisual (End-to-End): 98.38%

At the moment, this is the state-of-the-art performance for each modality on LRW.

*The coordinates used to crop the mouth region of interest (ROI) from LRW frames are (x1, y1, x2, y2) = (80, 116, 175, 211).
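As an illustration of how these fixed coordinates could be applied, the snippet below crops the mouth ROI from a single LRW frame with OpenCV; the file name is hypothetical and the grayscale conversion is an assumption, not necessarily the preprocessing used by the released model.

```python
import cv2

# Fixed mouth-ROI crop for LRW: (x1, y1, x2, y2) = (80, 116, 175, 211).
X1, Y1, X2, Y2 = 80, 116, 175, 211

frame = cv2.imread("lrw_frame.png")                      # hypothetical path to one extracted LRW frame
assert frame is not None, "frame not found"
mouth_roi = frame[Y1:Y2, X1:X2]                          # NumPy indexing is [rows (y), columns (x)]
mouth_roi = cv2.cvtColor(mouth_roi, cv2.COLOR_BGR2GRAY)  # grayscale mouth ROI (assumption)
print(mouth_roi.shape)                                   # (95, 95)
```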

The results are slightly better than the ones reported in the ICASSP paper due to further fine-tuning of the models.

[Paper], [Code], [Model]