Audio Samples for:

Multi-reference Neural TTS Stylization with Adversarial Cycle Consistency

Authors: Matt Whitehill, Shuang Ma, Daniel McDuff, Yale Song

Abstract: Current multi-reference style transfer models for Text-to-Speech (TTS) perform sub-optimally on disjoint datasets, where one dataset contains only a single style class for one of the style dimensions. These models generally fail to produce style transfer for the dimension that is underrepresented in the dataset. In this paper, we propose an adversarial cycle consistency training scheme with paired and unpaired triplets to ensure the use of information from all style dimensions. During training, we incorporate unpaired triplets with randomly selected reference audio samples and encourage the synthesized speech to preserve the appropriate styles using adversarial cycle consistency. We use this method to transfer emotion from a dataset containing four emotions to a dataset with only a single emotion. This results in a 78% improvement in style transfer (based on emotion classification) with minimal reduction in fidelity and naturalness. In subjective evaluations, our method was consistently rated as closer to the reference style than the baseline.

Figure: Our adversarial cycle consistency training scheme for unpaired samples in a two-reference model. Paired samples are trained with the same scheme and the same components, except the synthesized samples are not re-encoded, i.e., the orange dashed lines do not exist for paired samples.
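To make the scheme above concrete, here is a minimal PyTorch-style sketch of the unpaired generator objective. This is an illustration, not the paper's implementation: the module architectures, dimensions, helper names (StyleEncoder, ToySynthesizer, Discriminator, unpaired_generator_loss), and the loss weights lam_cyc and lam_adv are all hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleEncoder(nn.Module):
    """Toy stand-in: GRU over mel frames -> fixed-size style embedding."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.gru = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, mel):                         # mel: (B, T, n_mels)
        _, h = self.gru(mel)
        return h[-1]                                # (B, dim)

class ToySynthesizer(nn.Module):
    """Toy stand-in: text features + two style embeddings -> mel."""
    def __init__(self, text_dim=256, style_dim=128, n_mels=80, frames=200):
        super().__init__()
        self.frames, self.n_mels = frames, n_mels
        self.proj = nn.Linear(text_dim + 2 * style_dim, frames * n_mels)

    def forward(self, text, spk_style, emo_style):  # text: (B, text_dim)
        x = torch.cat([text, spk_style, emo_style], dim=-1)
        return self.proj(x).view(-1, self.frames, self.n_mels)

class Discriminator(nn.Module):
    """Real/fake logit for a mel; drives the adversarial term."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.enc = StyleEncoder(n_mels, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, mel):
        return self.head(self.enc(mel))             # (B, 1)

def unpaired_generator_loss(text, spk_ref, emo_ref,
                            spk_enc, emo_enc, synth, disc,
                            lam_cyc=1.0, lam_adv=0.1):
    """Unpaired path: the references are randomly selected, so there is
    no ground-truth mel to reconstruct. Instead, re-encode the
    synthesized mel and require each style embedding to round-trip back
    to its reference (the cycle terms, the orange dashed lines in the
    figure), while also fooling the discriminator. For paired triplets,
    the re-encoding terms are dropped and a reconstruction loss against
    the ground-truth mel is used instead."""
    spk_style = spk_enc(spk_ref)                    # speaker reference
    emo_style = emo_enc(emo_ref)                    # emotion reference
    fake = synth(text, spk_style, emo_style)

    # Cycle consistency: the synthesized speech must still carry both styles.
    cyc = (F.l1_loss(spk_enc(fake), spk_style.detach()) +
           F.l1_loss(emo_enc(fake), emo_style.detach()))

    # Adversarial term: the synthesized mel should look real to the critic.
    logits = disc(fake)
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return lam_cyc * cyc + lam_adv * adv
```

In the actual model, the synthesizer would presumably be a sequence-to-sequence TTS network conditioned on the input text, with separate reference encoders for the speaker and emotion dimensions as in the figure; the toy modules here only reproduce the shape of the training objective.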

Audio Samples

The dataset used for our study contains two speakers. Speaker 1 has training samples only in the neutral emotion; Speaker 2 has training samples for neutral, angry, happy, and sad. The goal of this work is to transfer emotion from Speaker 2 to Speaker 1.

Below, we have synthesized two different text samples from the Speaker 1 test set in the four emotions. The first column contains the emotion reference (a sample from Speaker 2) used to determine the emotion of the synthesized sample. Column 2 contains the sample synthesized by the baseline, and column 3 contains the sample synthesized by our model.

Text 1 - "They've provided no leadership, shown no courage at all."

Speaker 1 Reference

Neutral - Reference

Neutral - Baseline

Neutral - Our Model

Angry - Reference

Angry - Baseline

Angry - Our Model

Happy - Reference

Happy - Baseline

Happy - Our Model

Sad - Reference

Sad - Baseline

Sad - Our Model

Text 2 - "Here are some movies named The Green Mile."

Speaker 1 Reference

Neutral - Reference

Neutral - Baseline

Neutral - Our Model

Angry - Reference

Angry - Baseline

Angry - Our Model

Happy - Reference

Happy - Baseline

Happy - Our Model

Sad - Reference

Sad - Baseline

Sad - Our Model