Challenge Evaluation
Submissions will be evaluated on naturalness and speaker-similarity scores for both monolingual and cross-lingual synthesis.
Each submission will be rated on 144 utterances in total, split equally between monolingual and cross-lingual synthesis. The breakdown is shown below.
Each submission will be evaluated by multiple evaluators who are native speakers of the target language.
If there are more than 10 submissions, the top ten teams will first be selected based on an objective score (Character Error Rate). Subjective evaluation (naturalness and speaker similarity) will then be carried out for those top 10 teams only.
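For reference, Character Error Rate is conventionally computed as the Levenshtein (edit) distance between a reference transcript and a hypothesis transcript, divided by the reference length in characters. The sketch below illustrates that standard definition only; the function names are hypothetical, and the challenge's official scoring script may apply its own text normalization before computing CER.

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance between two character sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """CER = (substitutions + insertions + deletions) / reference length."""
    if not reference:
        raise ValueError("reference must be non-empty")
    return edit_distance(reference, hypothesis) / len(reference)

print(round(cer("kitten", "sitting"), 4))  # 3 edits over 6 chars -> 0.5
```

In practice the hypothesis would come from running an ASR system on the synthesized audio, so a lower CER indicates more intelligible synthesis.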