Human Uncertainty in Concept-Based AI Systems
Katherine M. Collins, Matthew Barker*, Mateo Espinosa Zarlenga*, Naveen Raman*, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, and Krishnamurthy (Dj) Dvijotham
Abstract
Placing a human in the loop may abate the risks of deploying AI systems in safety-critical settings (e.g., a clinician working with a medical AI system). However, mitigating risks arising from human error and uncertainty within such human-AI interactions is an important and understudied issue. In this work, we study human uncertainty in the context of concept-based models, a family of AI systems that enable human feedback via concept interventions where an expert intervenes on human-interpretable concepts relevant to the task. Prior work in this space often assumes that humans are oracles who are always certain and correct. Yet, real-world decision-making by humans is prone to occasional mistakes and uncertainty. We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans. We show that training with uncertain concept labels may help mitigate weaknesses of concept-based systems when handling uncertain interventions. These results allow us to identify several open challenges, which we argue can be tackled through future multidisciplinary research on building interactive uncertainty-aware systems. To facilitate further research, we release a new elicitation platform, UElic, to collect uncertain feedback from humans in collaborative prediction tasks.
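To make the idea of a concept intervention concrete, here is a minimal toy sketch (not the paper's implementation) of a concept-bottleneck-style model in which a human replaces a predicted concept probability with a *soft* value reflecting their uncertainty, rather than a hard 0/1 "oracle" label. All weights, shapes, and names below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy concept-bottleneck model: input -> 3 concept logits -> 2 class logits.
# Random weights stand in for a trained model.
W_c = rng.normal(size=(4, 3))   # input features -> concept logits
W_y = rng.normal(size=(3, 2))   # concept probabilities -> class logits

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, intervened=None):
    """Forward pass; `intervened` maps concept index -> human soft label in [0, 1]."""
    c = sigmoid(x @ W_c)               # model's own concept probabilities
    if intervened:
        for i, p in intervened.items():
            c[i] = p                   # intervention overwrites the concept value
    logits = c @ W_y                   # downstream prediction uses (possibly
    return c, logits                   # intervened) concepts

x = rng.normal(size=4)
_, plain = predict(x)
# A human who is only ~70% sure the concept is present can intervene softly
# with 0.7, instead of being forced to commit to a hard 0 or 1.
_, soft = predict(x, intervened={0: 0.7})
```

The key design point illustrated here is that the downstream predictor consumes concept *probabilities*, so uncertain human feedback can flow through unchanged instead of being rounded to a binary oracle answer.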
Resources
ArXiv Paper: https://arxiv.org/abs/2303.12872
CUB-S Data: https://github.com/cambridge-mlg/cifar-10s
UElic Interface Code: [FORTHCOMING]
Computational Experiment Code: [FORTHCOMING]
Citing Us
Please cite us using the following BibTeX entry if you use our data, code, or elicitation interface. Thank you :)
@inproceedings{collins2023humanConceptUnc,
title={Human Uncertainty in Concept-Based AI Systems},
author={Katherine M. Collins and Matthew Barker and Mateo Espinosa Zarlenga and Naveen Raman and Umang Bhatt and Mateja Jamnik and Ilia Sucholutsky and Adrian Weller and Krishnamurthy Dvijotham},
booktitle={Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES)},
year={2023},
}
Contact: kmc61@cam.ac.uk