Access the virtual workshop here (Feb 8-9) - https://virtual.2021.aaai.org/workshop_WS-11.html

Description

As artificial intelligence has become tightly interwoven with society, with tangible consequences and influence, calls for explainability and interpretability of these systems have become increasingly prevalent. Explainable AI (XAI) attempts to alleviate concerns about transparency, trust and ethics in AI by making AI systems accountable, interpretable and explainable to humans. This workshop aims to encapsulate these concepts under the umbrella of Explainable Agency and to bring together researchers and practitioners working on different facets of explainable AI, from diverse backgrounds, to share challenges, new directions and recent research in the field. We especially welcome research from fields including, but not limited to, artificial intelligence, human-computer interaction, human-robot interaction, cognitive science, human factors and philosophy.


Challenges

XAI has received substantial but disjoint attention in different subareas of AI, including machine learning, planning, intelligent agents, and several others. There has been limited interaction among these subareas on XAI, and even less work has focused on promoting and sharing sound designs, methods, and measures for evaluating the effectiveness of explanations (generated by AI systems) in human subject studies. This has led to uneven development of XAI, and of its evaluation, across AI subareas. We aim to address this by encouraging a shared definition of Explainable Agency and by increasing awareness of work on XAI throughout the AI research community and in related disciplines (e.g., human factors, human-computer interaction, cognitive science).


Sample of Relevant Readings

  • Fox, M., Long, D., & Magazzeni, D. (2017). Explainable planning. In D.W. Aha, T. Darrell, M. Pazzani, D. Reid, C. Sammut, & P. Stone (Eds.) Explainable AI: Papers from the IJCAI Workshop. arXiv:1709.10256. [pdf]

  • Gunning, D., & Aha, D. W. (2019). DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Magazine, 40(2), 44-58. [pdf]

  • Lipton, Z. C. (2018). The Mythos of Model Interpretability. ACM Queue, 16, 30. [pdf]

  • Langley, P., Meadows, B., Sridharan, M., & Choi, D. (2017). Explainable agency for intelligent autonomous systems. In Proceedings of the Twenty-Ninth Annual Conference on Innovative Applications of Artificial Intelligence. San Francisco: AAAI Press. [pdf]

  • Doshi-Velez, F., & Kim, B. (2017). A Roadmap for a Rigorous Science of Interpretability. [pdf]

  • Chakraborti, T., Sreedharan, S., & Kambhampati, S. (2020). The Emerging Landscape of Explainable AI Planning and Decision Making. IJCAI. [pdf]

  • Anjomshoae, S., Najjar, A., Calvaresi, D., & Främling, K. (2019). Explainable Agents and Robots: Results from a Systematic Literature Review. AAMAS. [pdf]

  • Miller, T. (2017). Explanation in Artificial Intelligence: Insights from the Social Sciences. Artificial Intelligence Journal. [pdf]

  • Kraus, S., Azaria, A., Fiosina, J., Greve, M., Hazon, N., Kolbe, L., Lembcke, T., Müller, J. P., Schleibaum, S., & Vollrath, M. (2020). AI for Explaining Decisions in Multi-Agent Environments. AAAI. [pdf]

  • Freuder, E. (2017). Explaining ourselves: Human-aware constraint reasoning. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (pp. 4858-4862). San Francisco: AAAI Press. [pdf]

  • Goodman, B., & Flaxman, S. (2016). European Union regulations on algorithmic decision-making and a "right to explanation". [pdf]

  • Kulesza, T., Burnett, M., Wong, W. K., & Stumpf, S. (2015). Principles of explanatory debugging to personalize interactive machine learning. Proceedings of the Twentieth International Conference on Intelligent User Interfaces (pp. 126-137). Atlanta, GA: ACM Press. [pdf]


Related workshops held previously

  • IJCAI - Workshop on Explainable Artificial Intelligence (XAI) (2017, 2018, 2019, 2020)

  • AAMAS - Workshop on EXplainable TRansparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS) (2019, 2020)

  • ICAPS - Workshop on EXplainable AI Planning (XAIP) (2018, 2019, 2020)

  • AAAI - Explanation-aware Computing (ExaCt) (2008, 2010, 2011, 2012)

  • ICML - Fairness, Accountability, and Transparency (FAT/ML) (2014-2018)

  • ICML - Workshop on Human Interpretability in Machine Learning (WHI) (2020)

  • NIPS - Workshop on Interpretable ML for Complex Systems (2016)

  • CVPR - Workshop on Explainable AI (2019)

Topics

With the above challenges in mind, we welcome contributions on the following (and related) topic areas:

  • Explainable/Interpretable Machine Learning

  • Explainable Planning

  • Agent Policy Summarization

  • Explainable Agency

  • Human-AI Interaction

  • Human-Robot Interaction

  • Cognitive Theories

  • Philosophical Foundations

  • Interaction Design for XAI

  • XAI Evaluation

  • Fairness, Accountability and Transparency

  • XAI Domains and Benchmarks

  • Interactive Teaching Strategies and Explainability

  • Intelligent Tutoring

  • User Modeling

Workshop sponsored by

J.P. Morgan