Accepted Papers

Flexible Interpretability through Optimizable Counterfactual Explanations for Tree Ensembles

Ana Lucic, Harrie Oosterhuis, Hinda Haned and Maarten de Rijke


Evaluating Off-Policy Evaluation: Sensitivity and Robustness

Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita and Kei Tateno


Representational Harms in Image Tagging

Jared Katzman, Solon Barocas, Su Lin Blodgett, Kristen Laird, Morgan Klaus Scheuerman and Hanna Wallach


Independent Ethical Assessment of Text Classification Models: A Hate Speech Detection Case Study

Amitoj Singh, Jingshu Chen, Lihao Zhang, Amin Rasekh, Ilana Golbin and Anand Rao


Intersectional Bias in Causal Language Models

Liam Magee, Lida Ghahremanlou, Karen Soldatic and Shanthi Robertson


Enabling Flexible Downstream Fairness With Geometric Repair

Jessica Dai, Kweku Kwegyir-Aggrey, Keegan Hines and John Dickerson


Identifying Biased Subgroups in Ranking and Classification

Eliana Pastor, Luca de Alfaro and Elena Baralis


Evaluating Gender Bias in Hindi-English Machine Translation

Gauri Gupta, Krithika Ramesh and Sanjay Singh


Fairness for Text Classification Tasks with Identity Information Data Augmentation Methods

Mohit Wadhwa, Mohan Bhambhani, Ashvini Jindal, Uma Sawant and Ram Madhavan


PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability

SĂ­lvia Casacuberta, Esra Suel and Seth Flaxman


An Empirical Study of Accuracy, Fairness, Explainability, Distributional Robustness, and Adversarial Robustness

Moninder Singh, Gevorg Ghalachyan, Kush Varshney and Reginald Bryant


Measurement as governance in and for responsible AI

Abigail Jacobs


Monitoring fairness in machine learning models that predict patient mortality in the ICU

Tempest van Schaik, Xinggang Liu, Louis Atallah and Omar Badawi

Registration Information: At least one author of each accepted submission is required to register for KDD (either the full conference or workshops only) and to attend the workshop.