The Common Voice Contributor Capability Assessment evaluates your understanding of best practices for contributing sentences to the Mozilla Common Voice dataset.
Successfully completing this assessment may unlock access to Common Voice sentence contribution workflows on Effect Alpha.
Mozilla Common Voice is an open-source initiative led by the Mozilla Foundation to build publicly available voice datasets that help improve speech recognition technology.
Unlike proprietary datasets, Common Voice is designed to be:
Open and accessible
Community-driven
Representative of diverse languages, accents, and speaking styles
These datasets are used by researchers, developers, and organizations around the world to train and improve speech AI systems.
By contributing sentences and recordings, participants help create more inclusive and accurate voice technologies that better represent real people.
Learn more: https://commonvoice.mozilla.org/
Effect AI collaborates with the Mozilla ecosystem by contributing structured datasets created through the Effect Alpha workforce.
These contributions help:
Expand available training data
Improve linguistic diversity
Support open AI development
Explore Effect AI’s dataset contributions:
https://datacollective.mozillafoundation.org/datasets/cmkfm9fbl00nto0070sdcrak2
Your participation directly supports real datasets that become part of the broader open voice ecosystem.
The Mozilla Data Collective is a public platform where datasets are shared and documented for transparency and collaboration.
It provides:
Access to open datasets
Visibility into contributors and projects
Resources for researchers and developers building speech technologies
Explore the Data Collective:
https://datacollective.mozillafoundation.org/
This test evaluates your ability to create high-quality sentences suitable for speech datasets.
Focus areas include:
Text generation quality
Linguistic integrity
Policy compliance
Readability and natural speech
These standards help ensure contributed sentences are useful, safe, and representative of real-world language.
The assessment consists of 30 total questions covering:
Sentence Purpose & Dataset Integrity
Originality & Copyright
Sentence Structure & Length
Readability & Speakability
Numbers, Symbols & Formatting
Grammar, Clarity & Neutrality
Bias, Safety & Content Restrictions
Language Consistency & Locale
Dataset Diversity & Usefulness
Final Quality Judgment
Estimated duration: 10–15 minutes.
Please review carefully before starting.
Each question includes a countdown timer.
If time runs out, your current answer is automatically submitted and the assessment advances to the next question.
Once a question is submitted, you cannot return to it or change your answer.
To pass the assessment:
You must score at least 80% (24 out of 30 correct answers).
You have 3 total attempts for this assessment.
Important:
Starting the test counts as one attempt.
Interrupted sessions still count toward attempt limits.
Once all three attempts are used, no additional attempts are available, and the Effect AI Team cannot reset them.
Before starting:
Use a stable internet connection.
Avoid switching devices or browsers during the test.
Ensure you can complete the assessment without interruption.
Begin from the Capability Marketplace.
Read each scenario carefully.
Apply Common Voice contribution guidelines.
Work steadily within the timed format.
After completion:
Your responses are evaluated based on dataset guidelines and policy standards.
If you pass:
The Common Voice Contributor capability may be granted.
Sentence contribution workflows may become available.
After unlocking:
Return to your dashboard.
Check for newly available Common Voice contribution tasks.
Once the assessment is completed:
Your score is calculated.
Eligible capabilities can be claimed.
Access updates may appear shortly afterward.
If something seems unclear:
Review the Common Voice task guides.
Check Discord announcements.
Ask questions in Alpha support channels.