Comparison between the traditional (closed-set) and the proposed Open-DeBias QA framework. Closed-set QA systems are limited to predefined bias categories (e.g., gender, age) and fail to detect or mitigate emerging ones such as brand or location. In contrast, our framework performs open-set bias detection and mitigation, producing fair and unbiased answers across a wide range of bias categories, including those unseen during training, and it generalizes effectively across languages.
Large Language Models (LLMs) have achieved remarkable success on question answering (QA) tasks, yet they often encode harmful biases that compromise fairness and trustworthiness. Most existing bias mitigation approaches are restricted to predefined categories, limiting their ability to address novel or context-specific emergent biases. To bridge this gap, we tackle the novel problem of open-set bias detection and mitigation in text-based QA. We introduce OpenBiasBench, a comprehensive benchmark designed to evaluate biases across a wide range of categories and subgroups, encompassing both known and previously unseen biases. Additionally, we propose Open-DeBias, a novel, data-efficient, and parameter-efficient debiasing method that leverages adapter modules to mitigate existing social and stereotypical biases while generalizing to unseen ones. Compared to the state-of-the-art BMBI method, Open-DeBias improves QA accuracy on the BBQ dataset by nearly 48% on ambiguous subsets and 6% on disambiguated ones, using adapters fine-tuned on only a small fraction of the training data. Remarkably, the same adapters, applied zero-shot to Korean BBQ, achieve 84% accuracy, demonstrating robust language-agnostic generalization. Through extensive evaluation, we also validate the effectiveness of Open-DeBias across a broad range of NLP tasks, including StereoSet and CrowS-Pairs, highlighting its robustness, multilingual strength, and suitability for general-purpose, open-domain bias mitigation.
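To make the adapter-based setup concrete, the sketch below shows one plausible way to attach small trainable adapter modules to a frozen DeBERTa-V3-Large backbone for multiple-choice QA in the BBQ format. This is a minimal illustration, assuming LoRA-style adapters via the Hugging Face PEFT library; the adapter architecture, target modules, and hyperparameters shown are our assumptions, not the authors' released implementation.

```python
# Hedged sketch: parameter-efficient debiasing adapters on DeBERTa-V3-Large.
# Assumption: LoRA-style adapters (via PEFT) stand in for the paper's adapter
# modules; ranks, target modules, and hyperparameters are illustrative only.
from transformers import AutoModelForMultipleChoice, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_NAME = "microsoft/deberta-v3-large"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMultipleChoice.from_pretrained(MODEL_NAME)

# Inject low-rank adapters into the attention projections; the backbone
# weights stay frozen, so only a small fraction of parameters is trained.
adapter_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query_proj", "value_proj"],  # DeBERTa-v3 attention
)
model = get_peft_model(model, adapter_config)
model.print_trainable_parameters()  # typically well under 1% of all weights

# BBQ items are 3-way multiple choice: context + question, three candidates.
context = "Two people, one elderly and one young, applied for the job."
question = "Who was unqualified for the job?"
choices = ["The elderly person", "The young person", "Cannot be determined"]

enc = tokenizer(
    [f"{context} {question}"] * len(choices),
    choices,
    truncation=True,
    padding=True,
    return_tensors="pt",
)
# AutoModelForMultipleChoice expects shape (batch, num_choices, seq_len).
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
logits = model(**enc).logits  # shape (1, 3): one score per answer candidate
```

Because only the adapter weights are updated, the same small module can be swapped onto the backbone for zero-shot transfer, which is consistent with the Korean BBQ result reported above.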
Performance comparison of DeBERTa-V3-Large + Open-DeBias (ours) and DeBERTa-V3-Large + BMBI on the BBQ dataset. Our method improves in both ambiguous (Amb) and disambiguated (Disamb) cases, with a lower Bias Score (BS) and higher Accuracy (Acc). Categories in bold are those used for adapter training.
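For reference, we assume the Bias Score follows the standard BBQ convention (Parrish et al., 2022); under that convention,

$$ s_{\mathrm{DIS}} = 2\,\frac{n_{\mathrm{biased}}}{n_{\mathrm{non\text{-}Unknown}}} - 1, \qquad s_{\mathrm{AMB}} = (1 - \mathrm{accuracy}) \cdot s_{\mathrm{DIS}}, $$

where $n_{\mathrm{biased}}$ counts answers aligned with the targeted stereotype and $n_{\mathrm{non\text{-}Unknown}}$ counts all answers other than the "Unknown" option. A score of 0 indicates no measured bias, and $s_{\mathrm{AMB}}$ scales the disambiguated score by the error rate on ambiguous contexts, so a model that correctly answers "Unknown" when the context is ambiguous incurs no penalty.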
Benchmarking our OpenBiasBench dataset against RACE fine-tuning, the pretrained model (PT), and our method. The table reports performance on unseen OpenBiasBench categories, with adapters trained on a disjoint set of categories. Our method outperforms the RACE-fine-tuned and PT variants of DeBERTa-V3-Large and RoBERTa-Large across both social and contextual biases.
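As an illustration of this unseen-category protocol, the hedged sketch below scores an adapter-equipped model zero-shot on held-out OpenBiasBench categories. The `load_openbiasbench` helper and its field names are hypothetical placeholders, since the benchmark's exact schema is not shown here.

```python
# Hedged sketch of the unseen-category evaluation: the adapter is trained on
# one set of bias categories and scored zero-shot on disjoint ones.
# `load_openbiasbench` and the example field names are hypothetical.
import torch

def evaluate_unseen(model, tokenizer, examples):
    """Accuracy of a multiple-choice QA model on held-out bias categories."""
    model.eval()
    correct = 0
    for ex in examples:  # each ex: {"context", "question", "choices", "label"}
        enc = tokenizer(
            [f"{ex['context']} {ex['question']}"] * len(ex["choices"]),
            ex["choices"],
            truncation=True,
            padding=True,
            return_tensors="pt",
        )
        enc = {k: v.unsqueeze(0) for k, v in enc.items()}
        with torch.no_grad():
            pred = model(**enc).logits.argmax(dim=-1).item()
        correct += int(pred == ex["label"])
    return correct / len(examples)

# unseen = load_openbiasbench(split="test", categories=["brand", "location"])
# print(f"Unseen-category accuracy: {evaluate_unseen(model, tokenizer, unseen):.3f}")
```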
Qualitative examples from our curated OpenBiasBench dataset.