RESEARCH

Most work on gender bias within the machine learning community focuses on debiasing existing models, such as word embeddings, coreference resolution systems, and image captioning. We propose instead to build a dataset from which a model can be trained to detect gender bias in text.

A model trained on a robust gender bias dataset could directly address negative preconceptions about gender and guide human judgment by flagging gender biases in text. This could in turn prompt reflection on gender-sensitive topics and support the movement toward fairness and equity for all genders.

As a first step, we are following our proposed taxonomy and looking for one subtype of bias at a time. We believe in an interdisciplinary approach to tackling gender bias in text and are working closely with experts in sociolinguistics, history, and NLP.

Check out our current publications here.

If you have further inquiries, please feel free to contact us!
