The AAAI-22 WORKSHOP ON

INTERACTIVE MACHINE LEARNING


Recent years have witnessed growing interest in the interface between human endeavours and AI systems, driven by the increasing realisation that machines can indeed meet the objectives they are given -- the real question is whether they have been given the right objectives. A central topic in this area is Interactive Machine Learning (IML), which is concerned with developing algorithms that enable machines to cooperate with human agents to solve a shared prediction, learning, or teaching task.

Key questions in IML revolve around how best to integrate people into the learning loop in a way that is transparent, efficient, and beneficial to the human-AI team as a whole, supporting different requirements and users with different levels of expertise. IML solutions also face the substantial challenge of helping users understand the impact of their input, so as to instil trust and confidence. Additionally, allowing users or model creators to edit a model risks introducing biases or unintended consequences.

IML is highly multi-disciplinary and spans a variety of topics and applications, including human-computer interaction, recommender systems, natural language processing, computer vision, and robotics. Advances in IML promise to make AI systems more accessible and controllable, more compatible with the values of their human partners, more human-aware, and more trustworthy. Such advances would broaden the applicability of semi-autonomous systems to real-world tasks, most of which involve cooperation with one or more human partners.

Despite its potential, knowledge transfer between the different sub-topics of IML, and between research and applications, has been limited. We believe that, with recent advances in explainability techniques and the growing attention to interaction between human and artificial agents, now is the time to fill this gap by bringing together researchers from industry and academia and from different disciplines within AI and surrounding areas. This is the main purpose of this workshop.

TOPICS

We invite submissions on a range of topics, including but not limited to:

  • Strategies for traditional settings: active and imitation learning, interactive recommendation;

  • Interactive strategies for non-standard settings: online learning; hierarchical and constrained prediction; generative models; feature and concept acquisition; data wrangling and cleaning;

  • Interactive multi-objective optimization algorithms and techniques;

  • Novel mechanisms for eliciting and consuming user feedback;

  • Understandable interaction, especially in the context of uncovering and debugging undesirable behavior;

  • HCI and visualization challenges in supporting user interaction;

  • Analysis of human factors and cognition;

  • Personalisation and user modelling;

  • Human-initiated and mixed-initiative interaction protocols;

  • Design, testing and assessment of interactive machine learning systems;

  • Studies on risks of introducing interaction mechanisms such as information leakage and bias;

  • Business use cases and novel applications of interactive machine learning.
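As a concrete illustration of the first topic above, a pool-based active-learning loop with uncertainty sampling might look as follows. This is a minimal sketch using scikit-learn on synthetic data; the human annotator is simulated here by ground-truth labels, whereas in a real IML system each query would go to a person.

```python
# Minimal pool-based active learning with uncertainty sampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Start from a small randomly labeled seed set; the rest form the pool.
labeled = [int(i) for i in rng.choice(len(X), size=10, replace=False)]
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 interaction rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    # Query the pool instance the model is least certain about.
    uncertainty = 1.0 - proba.max(axis=1)
    query = pool[int(np.argmax(uncertainty))]
    labeled.append(query)  # the "annotator" supplies the label y[query]
    pool.remove(query)

accuracy = model.score(X, y)
print(f"labeled examples: {len(labeled)}, accuracy: {accuracy:.2f}")
```

The key design choice is the query strategy: uncertainty sampling asks the human about the instance closest to the decision boundary, so each unit of annotation effort is spent where the model expects to learn the most.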

CALL FOR PAPERS

Long papers (up to 6 pages plus references) and extended abstracts (2 pages plus references) are welcome, including resubmissions of already accepted papers, work-in-progress, and position papers. The review process will be double-blind. All submissions should be formatted using the AAAI-22 Author Kit, linked below.

All accepted papers will be presented as posters and linked from the workshop page. The best contributions will be allocated a 15-minute presentation during the workshop to maximize their visibility and impact.


ACCEPTED PAPERS

  • Ido Shapira and Amos Azaria. A Socially Aware Reinforcement Learning Agent for The Single Track Road Problem. (arxiv)

  • Manuela Pollak, Andrea Salfinger and Karin Anna Hummel. Teaching drones on-the-fly: Can emotional feedback serve as learning signal for training artificial agents? (arxiv)

  • Yujiang He, Zhixin Huang and Bernhard Sick. Design of Explainability Module with Experts in the Loop for Visualization and Dynamic Adjustment of Continual Learning. (arxiv)

  • Mareike Hartmann, Aliki Anagnostopoulou and Daniel Sonntag. Interactive Machine Learning for Image Captioning. (arxiv)

  • Lincen Yang and Matthijs van Leeuwen. Probabilistic Rule Sets Ready for Interactive Machine Learning. (Extended Abstract)

  • Sravan Jayanthi, Letian Chen and Matthew Gombolay. Strategy Discovery and Mixture in Lifelong Learning from Heterogeneous Demonstration. (arxiv)

  • Chace Hayhurst, Hyojae Park, Atrey Desai, Suheidy De Los Santos and Michael Littman. Reinforcement Learning As End-User Trigger-Action Programming. (Extended Abstract)

  • Andrea Bontempelli, Fausto Giunchiglia, Andrea Passerini and Stefano Teso. Toward a Unified Framework for Debugging Gray-box Models. (arxiv)

  • Jasmina Gajcin, Rahul Nair, Tejaswini Pedapati, Radu Marinescu, Elizabeth Daly and Ivana Dusparic. Contrastive Explanations for Comparing Preferences of Reinforcement Learning Agents. (arxiv)

  • Federico Malato, Joona Jehkonen and Ville Hautamaki. Improving Behavioural Cloning with Human-Driven Dynamic Dataset Augmentation. (arxiv)

  • Lu Yin, Vlado Menkovski, Yulong Pei and Mykola Pechenizkiy. Semantic-Based Few-Shot Learning by Interactive Psychometric Testing. (arxiv)

  • Stefano Teso and Antonio Vergari. Efficient and Reliable Probabilistic Interactive Learning with Structured Outputs. (arxiv)

  • Niket Tandon, Aman Madaan, Peter Clark, Keisuke Sakaguchi and Yiming Yang. INTERSCRIPT: A dataset for interactive learning of scripts through error feedback. (arxiv)

KEYNOTE SPEAKERS

Andreas Holzinger, Medical University Graz

Bio: Andreas Holzinger leads the Human-Centered AI Lab at the Institute for Medical Informatics and Statistics, Medical University of Graz, Austria, and is a visiting professor at the Alberta Machine Intelligence Institute in Edmonton, Canada. He is a full member of the European Lab for Learning and Intelligent Systems. Andreas works on Human-Centered AI, motivated by efforts to improve human health, and pioneered interactive machine learning with the human in the loop. For his achievements, he was elected a member of Academia Europaea in 2019. He obtained a Ph.D. in Cognitive Science from the University of Graz in 1998 and a second Ph.D. in Computer Science from Graz University of Technology in 2003. He has been a visiting professor for Machine Learning & Knowledge Extraction in Verona, Italy; at RWTH Aachen, Germany; and at University College London and Middlesex University London, UK. Since 2016 he has been Visiting Professor for Machine Learning in Health Informatics at the Faculty of Informatics, Vienna University of Technology. Andreas is paving the way towards multi-modal causability, promoting robust, interpretable and trustworthy medical AI, and advocating a synergistic approach that puts the human in control of AI and aligns AI with human values, privacy, security, and safety. AI and machine learning are remarkably successful and even outperform humans in certain tasks; humans, on the other hand, are experts in multi-modal reasoning and can embed new information into a conceptual knowledge space shaped by experience. Andreas Holzinger's vision is to create systems capable of explaining themselves, engaging a human expert in interactive counterfactual "what if" questions. His Leitmotiv is that using conceptual knowledge as a guiding model of reality will help to train machine learning models that are more robust, more interpretable, less biased, and ideally able to learn from fewer data.

Slides

Cynthia Rudin, Duke University

Bio: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, and directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics.

Simone Stumpf, University of Glasgow

Bio: Dr. Simone Stumpf is a Reader in Responsible and Interactive AI at the University of Glasgow, UK. She has a long-standing research focus on user interactions with machine learning systems. Her current research includes self-management systems for people living with long-term conditions, developing teachable object recognisers for people who are blind or have low vision, and investigating AI fairness. Her work has contributed to shaping the field of Explainable AI (XAI) through the Explanatory Debugging approach for interactive machine learning, providing design principles for enabling better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use intelligent systems effectively.

SCHEDULE

The workshop will take place on Monday, February 28th, on the AAAI-22 Virtual Chair platform (Room Blue 9).


  • 9:00 AM Introduction

  • 9:10 Keynote 1: Simone Stumpf

  • 10:00 Break

  • 10:10 Session 1:

    • Manuela Pollak, Andrea Salfinger and Karin Anna Hummel. Teaching drones on-the-fly: Can emotional feedback serve as learning signal for training artificial agents? (arxiv)

    • Niket Tandon, Aman Madaan, Peter Clark, Keisuke Sakaguchi and Yiming Yang. INTERSCRIPT: A dataset for interactive learning of scripts through error feedback. (arxiv)

    • Andrea Bontempelli, Fausto Giunchiglia, Andrea Passerini and Stefano Teso. Toward a Unified Framework for Debugging Gray-box Models. (arxiv)

    • Ido Shapira and Amos Azaria. A Socially Aware Reinforcement Learning Agent for The Single Track Road Problem. (arxiv)

  • 11:00 Break

  • 11:10 Keynote 2: Andreas Holzinger

  • 12:00 Ice-breaking

  • Lunch

  • 1:30 PM Session 2:

    • Stefano Teso and Antonio Vergari. Efficient and Reliable Probabilistic Interactive Learning with Structured Outputs. (arxiv)

    • Federico Malato, Joona Jehkonen and Ville Hautamaki. Improving Behavioural Cloning with Human-Driven Dynamic Dataset Augmentation. (arxiv)

    • Jasmina Gajcin, Rahul Nair, Tejaswini Pedapati, Radu Marinescu, Elizabeth Daly and Ivana Dusparic. Contrastive Explanations for Comparing Preferences of Reinforcement Learning Agents. (arxiv)

    • Yujiang He, Zhixin Huang and Bernhard Sick. Design of Explainability Module with Experts in the Loop for Visualization and Dynamic Adjustment of Continual Learning. (arxiv)

  • 2:20 Break

  • 2:30 Keynote 3: Cynthia Rudin

  • 3:20 Break

  • 3:30 Session 3:

    • Sravan Jayanthi, Letian Chen and Matthew Gombolay. Strategy Discovery and Mixture in Lifelong Learning from Heterogeneous Demonstration. (arxiv)

    • Mareike Hartmann, Aliki Anagnostopoulou and Daniel Sonntag. Interactive Machine Learning for Image Captioning. (arxiv)

    • Lu Yin, Vlado Menkovski, Yulong Pei and Mykola Pechenizkiy. Semantic-Based Few-Shot Learning by Interactive Psychometric Testing. (arxiv)

    • Lincen Yang and Matthijs van Leeuwen. Probabilistic Rule Sets Ready for Interactive Machine Learning. (Extended Abstract)

    • Chace Hayhurst, Hyojae Park, Atrey Desai, Suheidy De Los Santos and Michael Littman. Reinforcement Learning As End-User Trigger-Action Programming. (Extended Abstract)

  • 4:20 Final Session

PROGRAM COMMITTEE

  • Julius Adebayo, MIT

  • Katrien Beuls, Vrije Universiteit Brussel

  • Mustafa Bilgic, Illinois Institute of Technology

  • Peter Flach, University of Bristol

  • Bradley Hayes, University of Colorado

  • Jose Hernandez-Orallo, Universitat Politecnica de Valencia

  • Shalmali Joshi, Harvard University

  • Siddharth Karamcheti, Stanford University

  • Kristian Kersting, TU Darmstadt

  • Daniel Kottke, Kassel University

  • Todd Kulesza, Google

  • Piyawat Lertvittayakumjorn, Imperial College London

  • Shuyang Li, University of California San Diego

  • Vera Liao, IBM Research

  • Manuel Lopes, Universidade de Lisboa

  • Andrea Passerini, University of Trento

  • Subramanian Ramamoorthy, University of Edinburgh

  • Udo Schlegel, University of Konstanz

  • Kacper Sokol, University of Bristol

  • Daniel Sonntag, DFKI

  • Kaushik Subramanian, SonyAI

  • Kush Varshney, IBM Research

  • Yash Goyal, Samsung -- SAIT AI Lab Montreal

  • Sriraam Natarajan, UT Dallas

ORGANIZERS

  • Elizabeth Daly (Workshop Chair), IBM Research, Dublin

  • Öznur Alkan, IBM Research, Dublin

  • Stefano Teso, University of Trento

  • Wolfgang Stammer, TU Darmstadt

EXAMPLE REFERENCES (SAMPLER)

  • Simone Stumpf, Vidya Rajaram, Lida Li, Margaret Burnett, Thomas Dietterich, Erin Sullivan, Russell Drummond, and Jonathan Herlocker. "Toward harnessing user feedback for machine learning." IUI. 2007.

  • Maytal Saar-Tsechansky, Prem Melville, and Foster Provost. "Active feature-value acquisition." Management Science. 2009.

  • Burr Settles. "From theories to queries: Active learning in practice." In Active Learning and Experimental Design workshop @ AISTATS'10. 2011.

  • Li Chen and Pearl Pu. "Critiquing-based recommenders: survey and emerging trends." User Modeling and User-Adapted Interaction. 2012.

  • Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. "Power to the people: The role of humans in interactive machine learning." AI Magazine. 2014.

  • Steve Branson, Grant Van Horn, Catherine Wah, Pietro Perona, and Serge Belongie. "The ignorant led by the blind: A hybrid human–machine vision system for fine-grained categorization." International Journal of Computer Vision. 2014.

  • Todd Kulesza, Margaret Burnett, Weng-Keen Wong, and Simone Stumpf. "Principles of explanatory debugging to personalize interactive machine learning." IUI. 2015.

  • Been Kim, Elena Glassman, Brittney Johnson, and Julie Shah. "iBCM: Interactive Bayesian case model empowering humans via intuitive interaction." 2015.

  • Andreas Holzinger. "Interactive machine learning for health informatics: when do we need the human-in-the-loop?" Brain Informatics. 2016.

  • Manali Sharma and Mustafa Bilgic. "Evidence-based uncertainty sampling for active learning." Data Mining and Knowledge Discovery. 2017.

  • Michael Kelly, Chelsea Sidrane, Katherine Driggs-Campbell, and Mykel J. Kochenderfer. "HG-DAgger: Interactive imitation learning with human experts." ICRA. 2019.

  • Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, and Kristian Kersting. "Making deep neural networks right for the right scientific reasons by interacting with their explanations." Nature Machine Intelligence. 2020.

  • Gonzalo Ramos, Christopher Meek, Patrice Simard, Jina Suh, and Soroush Ghorashi. "Interactive machine teaching: a human-centered approach to building machine-learned models." Human–Computer Interaction. 2020.

  • Changjian Shui, Fan Zhou, Christian Gagné, and Boyu Wang. "Deep active learning: Unified and principled method for query and training." AISTATS. 2020.

  • Alina Jade Barnett, Fides Regina Schwartz, Chaofan Tao, Chaofan Chen, Yinhao Ren, Joseph Y. Lo, and Cynthia Rudin. "IAIA-BL: A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography." arXiv:2103.12308. 2021.