Tutorial at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021)
Location: KDD 2021 (Virtual Conference). Time and Date: 9:00 AM-12:00 PM, August 14, 2021 (Singapore Time, SGT).
Slides for this tutorial: https://drive.google.com/file/d/1uRo9kmoyA9CvJwDm2z1mOp0DmVa2Jq2v/view?usp=sharing
If you have any questions, email us at counterfactual.xai@gmail.com.
Deep learning has shown powerful performance in many fields, but its black-box nature hinders its further application. In response, explainable artificial intelligence (XAI) has emerged, aiming to explain the predictions and behaviors of deep learning models. Among the many explanation methods, counterfactual explanation has been identified as one of the best because of its resemblance to the human cognitive process: it delivers an explanation by constructing a contrastive situation, so that humans may interpret the underlying mechanism by reasoning about the difference.
In this tutorial, we will introduce the cognitive concept and characteristics of counterfactual explanation, its computational form, mainstream methods, and its adaptations to different explanation settings. In addition, we will demonstrate several typical use cases of counterfactual explanations in popular research areas. Finally, in light of practice, we outline potential applications of counterfactual explanations, such as data augmentation and conversational systems. We hope this tutorial helps participants gain an overview of counterfactual explanations.
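To make the computational form concrete, here is a minimal sketch of the classic optimization view of counterfactual search, in the spirit of Wachter et al. (2017; see the references below): find an x' that minimizes a prediction loss toward the desired outcome plus a distance penalty keeping x' close to the original input x. The toy model, the placement of the trade-off weight, and all hyperparameters are illustrative assumptions, not the tutorial's code.

```python
# A minimal, illustrative gradient-based counterfactual search
# (in the spirit of Wachter et al., 2017). All names are assumptions.
import torch

def find_counterfactual(model, x, target, lam=0.1, lr=0.05, steps=500):
    """Search for x' close to x (L1 distance) with model(x') close to target."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_loss = (model(x_cf) - target) ** 2   # push the output toward target
        dist_loss = torch.norm(x_cf - x, p=1)     # stay close to the original x
        loss = pred_loss + lam * dist_loss
        loss.backward()
        opt.step()
    return x_cf.detach()

# Toy usage: a logistic model on 3 features, flipping a 0-prediction toward 1.
torch.manual_seed(0)
w, b = torch.randn(3), torch.tensor(-0.5)
model = lambda z: torch.sigmoid(z @ w + b)
x = torch.zeros(3)
x_cf = find_counterfactual(model, x, target=torch.tensor(1.0))
print("original prediction:", model(x).item())
print("counterfactual prediction:", model(x_cf).item())
```

In practice, the trade-off weight is usually tuned or annealed rather than fixed, and the distance term is chosen per data type (e.g., L1 for sparsity on tabular features).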
First Session (1.5 h):
Introduction
What is a counterfactual?
How to compute counterfactuals
– Classic explanation methods
– Advanced explanation methods
– Using GANs
– Metrics (see the sketch after this outline)
(5-minute break)
Second Session (1.5 h):
Counterfactuals in different areas
– Counterfactuals in Natural Language Processing (NLP)
– Counterfactuals in Recommendation Systems (RS)
– Counterfactuals in Computer Vision (CV)
– Counterfactuals in Graph Neural Networks (GNN)
Applications of counterfactuals
Conclusion
Q&A
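To accompany the "Metrics" item in the outline above, here is a minimal sketch of three commonly reported evaluation metrics for counterfactuals: validity, proximity, and sparsity, in the spirit of Mothilal et al. (2020; see the references below). The function names, thresholds, and toy classifier are illustrative assumptions.

```python
# Minimal, illustrative counterfactual evaluation metrics.
import numpy as np

def validity(model, x_cf, target_class):
    """Does the counterfactual actually obtain the desired prediction?"""
    return int(model(x_cf) == target_class)

def proximity(x, x_cf):
    """L1 distance between the original and the counterfactual (lower is better)."""
    return float(np.linalg.norm(x - x_cf, ord=1))

def sparsity(x, x_cf, tol=1e-6):
    """Number of features changed (fewer is better)."""
    return int(np.sum(np.abs(x - x_cf) > tol))

# Toy usage with a threshold classifier on 4 features.
model = lambda z: int(z.sum() > 2.0)
x = np.array([0.2, 0.3, 0.1, 0.4])     # predicted class 0
x_cf = np.array([0.2, 0.3, 1.8, 0.4])  # one feature changed
print(validity(model, x_cf, target_class=1))  # 1: the prediction flipped
print(proximity(x, x_cf))                     # 1.7
print(sparsity(x, x_cf))                      # 1
```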
She is an AI research engineer in the Distributed Data Lab in CSI, Huawei Technologies Co., Ltd. She received the MS degree from the Hong Kong University of Science and Technology in 2019. Her research interests include explainable AI, computer vision, and counterfactual explanation in computer vision.
Xiao-Hui Li
She is an AI research engineer in the Distributed Data Lab in CSI, Huawei Technologies Co., Ltd. She received the Ph.D. degree from the Hong Kong University of Science and Technology in 2018. Her research interests include explainable AI, quantum machine learning, and complex systems.
Han Gao
He is an AI research engineer in the Distributed Data Lab in CSI, Huawei Technologies Co., Ltd. He received the MS degree from the Hong Kong University of Science and Technology in 2019. His research interests include explainable AI, GNNs, and GANs.
Shendi Wang
He is an AI research engineer in the Distributed Data Lab in CSI, Huawei Technologies Co., Ltd. He received the Ph.D. degree from The University of Edinburgh in 2017 and the MS degree from Newcastle University in 2011. His research interests include explainable AI and explainable recommender systems.
Luning Wang
He is an AI research engineer in the Distributed Data Lab in CSI, Huawei Technologies Co., Ltd. He received the Ph.D. degree from City University of Hong Kong in 2020. His research interests include explainable AI, fairness in AI, and data mining.
He received the Ph.D. degree in Computer Science from The Hong Kong University of Science and Technology in 2014 and the B.E. degree in Telecommunication Engineering from Beijing University of Posts and Telecommunications in 2010. He is now a specialist in the Distributed Data Lab in CSI, Huawei Technologies Co., Ltd. His research interests include explainable AI, financial technologies, and computational sociology.
He received the BS degree in computer science and engineering from Tianjin University, China, in 1994, the MA degree from the Asian Institute of Technology, Thailand, in 1997, and the PhD degree in computer science from the University of Waterloo, Canada, in 2005. He is now a professor with the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. His research interests include human-powered machine learning, crowdsourcing, uncertain and probabilistic databases, multimedia and time series databases, and privacy protection. He is a fellow of the IEEE.
Molnar, C. (2020). Interpretable machine learning. Lulu.com.
Byrne, R. M. (2019, August). Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning. In IJCAI (pp. 6276-6282).
Byrne, R. M. (2007). The rational imagination: How people create alternatives to reality. MIT press.
Li, X. H., Cao, C. C., Shi, Y., Bai, W., Gao, H., Qiu, L., ... & Chen, L. (2020). A Survey of Data-driven and Knowledge-aware eXplainable AI. IEEE Transactions on Knowledge and Data Engineering.
Pearl, J., & Mackenzie, D. (2018). The book of why: the new science of cause and effect. Basic books.
Wachter, S., Mittelstadt, B., & Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, 841.
Mc Grath, R., Costabello, L., Le Van, C., Sweeney, P., Kamiab, F., Shen, Z., & Lecue, F. (2018, December). Interpretable Credit Application Predictions With Counterfactual Explanations. In NeurIPS 2018 Workshop on Challenges and Opportunities for AI in Financial Services: the Impact of Fairness, Explainability, Accuracy, and Privacy.
Mothilal, R. K., Sharma, A., & Tan, C. (2020, January). Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 607-617).
Russell, C. (2019, January). Efficient search for diverse coherent explanations. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 20-28).
Van Looveren, A., & Klaise, J. (2019). Interpretable Counterfactual Explanations Guided by Prototypes. arXiv preprint arXiv:1907.02584.
Liu, S., Kailkhura, B., Loveland, D., & Han, Y. (2019, November). Generative counterfactual introspection for explainable deep learning. In 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP) (pp. 1-5). IEEE.
Van Looveren, A., Klaise, J., Vacanti, G., & Cobb, O. (2021). Conditional Generative Models for Counterfactual Explanations. arXiv preprint arXiv:2101.10123.
Olson, M. L., Khanna, R., Neal, L., Li, F., & Wong, W. K. (2021). Counterfactual state explanations for reinforcement learning agents via generative deep learning. Artificial Intelligence, 295, 103455.
Yue, Z., Wang, T., Sun, Q., Hua, X. S., & Zhang, H. (2021). Counterfactual zero-shot and open-set visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 15404-15414).
Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608.
Le, T., Wang, S., & Lee, D. (2020, August). GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 238-248).
Hsiao, J. H. W., Ngai, H. H. T., Qiu, L., Yang, Y., & Cao, C. C. (2021). Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI). arXiv preprint arXiv:2108.01737.
Martens, D., & Provost, F. (2014). Explaining data-driven document classifications. MIS quarterly, 38(1), 73-100.
Yang, L., Kenny, E., Ng, T. L. J., Yang, Y., Smyth, B., & Dong, R. (2020, December). Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification. In Proceedings of the 28th International Conference on Computational Linguistics (pp. 6150-6160).
Wu, T., Ribeiro, M. T., Heer, J., & Weld, D. S. (2021). Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics.
Ghazimatin, A., Balalau, O., Saha Roy, R., & Weikum, G. (2020, January). PRINCE: Provider-side interpretability with counterfactual explanations in recommender systems. In Proceedings of the 13th International Conference on Web Search and Data Mining (pp. 196-204).
Ghazimatin, A., Pramanik, S., Saha Roy, R., & Weikum, G. (2021, April). ELIXIR: Learning from User Feedback on Explanations to Improve Recommender Models. In Proceedings of the Web Conference 2021 (pp. 3850-3860).
Kaffes, V., Sacharidis, D., & Giannopoulos, G. (2021, June). Model-Agnostic Counterfactual Explanations of Recommendations. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (pp. 280-285).
Dhurandhar, A., Chen, P. Y., Luss, R., Tu, C. C., Ting, P., Shanmugam, K., & Das, P. (2018). Explanations based on the missing: Towards contrastive explanations with pertinent negatives. arXiv preprint arXiv:1802.07623.
Goyal, Y., Wu, Z., Ernst, J., Batra, D., Parikh, D., & Lee, S. (2019, May). Counterfactual visual explanations. In International Conference on Machine Learning (pp. 2376-2384). PMLR.
Akula, A., Wang, S., & Zhu, S. C. (2020, April). Cocox: Generating conceptual and counterfactual explanations via fault-lines. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 03, pp. 2594-2601).
Zhou, J., Cui, G., Hu, S., Zhang, Z., Yang, C., Liu, Z., ... & Sun, M. (2020). Graph neural networks: A review of methods and applications. AI Open, 1, 57-81.
Ying, R., Bourgeois, D., You, J., Zitnik, M., & Leskovec, J. (2019). GNNExplainer: Generating explanations for graph neural networks. Advances in Neural Information Processing Systems, 32, 9240.
Lucic, A., ter Hoeve, M., Tolomei, G., de Rijke, M., & Silvestri, F. (2021). CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks. arXiv preprint arXiv:2102.03322.
Sun, Y., Valente, A., Liu, S., & Wang, D. (2021). Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation. arXiv preprint arXiv:2103.13944.
Yeh, R. A., Schwing, A. G., Huang, J., & Murphy, K. (2019). Diverse generation for multi-agent sports games. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4610-4619).
Sokol, K., & Flach, P. A. (2018, July). Glass-Box: Explaining AI Decisions With Counterfactual Statements Through Conversation With a Voice-enabled Virtual Assistant. In IJCAI (pp. 5868-5870).
Sokol, K., & Flach, P. A. (2018, July). Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements. In IJCAI (pp. 5785-5786).
Sokol, K., & Flach, P. A. (2019, January). Counterfactual explanations of machine learning predictions: opportunities and challenges for AI safety. In SafeAI@ AAAI.
Madaan, N., Padhi, I., Panwar, N., & Saha, D. (2020). Generate your counterfactuals: Towards controlled counterfactual generation for text. arXiv preprint arXiv:2012.04698.
This section lists papers that are not covered in the tutorial.
Dandl, S., Molnar, C., Binder, M., & Bischl, B. (2020, September). Multi-objective counterfactual explanations. In International Conference on Parallel Problem Solving from Nature (pp. 448-469). Springer, Cham.
Notes: tabular data, multi-objective optimization problem, plausible, NSGA-II
Poyiadzi, R., Sokol, K., Santos-Rodriguez, R., De Bie, T., & Flach, P. (2020, February). FACE: Feasible and actionable counterfactual explanations. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 344-350).
Notes: tabular data, feasible path, construct graph, density estimator, and conditions function
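As a rough sketch of the feasible-path idea (a distance-weighted graph over training points, with the counterfactual chosen among target-class points reachable from the query), one might write the following; the full method also weights edges by a density estimator and applies feasibility conditions, and every name below is an illustrative assumption:

```python
# Simplified, illustrative FACE-style counterfactual: nearest target-class
# point reachable from x through a graph of nearby training points.
import numpy as np
import networkx as nx

def face_counterfactual(X, labels, model, x_idx, target_class, eps=2.5):
    """Return the index of the closest connected target-class point, or None."""
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d = np.linalg.norm(X[i] - X[j])
            if d <= eps:                      # connect only nearby points
                G.add_edge(i, j, weight=d)
    lengths = nx.single_source_dijkstra_path_length(G, x_idx)
    candidates = [(dist, i) for i, dist in lengths.items()
                  if labels[i] == target_class and model(X[i]) == target_class]
    return min(candidates)[1] if candidates else None

# Toy usage: two 2-D Gaussian blobs, flip a class-0 point to class 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
model = lambda z: int(z.sum() > 2.0)
print("counterfactual index:", face_counterfactual(X, labels, model, 0, 1))
```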
Mahajan, D., Tan, C., & Sharma, A. (2019). Preserving causal constraints in counterfactual explanations for machine learning classifiers. In NeurIPS 2019 Workshop on Do the Right Thing: Machine Learning and Causal Inference for Improved Decision Making.
Notes: tabular data, causal constraints, generative model, SCM
Last update: 19/NOV/2021 by Cong WANG