Human-Centered AI
Workshop at NeurIPS 2021

Monday 13 December 2021, online

Human-Centered AI (HCAI) is an emerging discipline that aims to create AI systems that amplify and augment human abilities and preserve human control in order to make AI partnerships more productive, enjoyable, and fair. Our workshop aims to bring together researchers and practitioners from the NeurIPS and HCI communities, as well as others with convergent interests in HCAI. With an emphasis on diversity and discussion, we will explore research questions that stem from the increasingly widespread usage of machine learning algorithms across all areas of society, with a specific focus on understanding both technical and design requirements for HCAI systems, as well as how to evaluate the efficacy and effects of HCAI systems.

Keynote Speakers

  • Cecilia Aragon, University of Washington, US. Dr. Aragon founded and directs the Human Centered Data Science Lab at the University of Washington. Her research focuses on enabling humans to gain insights from large datasets through a combination of machine learning and qualitative, quantitative, and visualization analyses. Dr. Aragon’s book, Human Centered Data Science, will be published by MIT Press in 2022.

  • Barbara Poblete, University of Chile; Millennium Institute on Data, Chile; Amazon. Dr. Poblete co-directs the "Fake News and Misinformation" multidisciplinary research group at the Millennium Institute on Data. Her research areas are Social Network Analysis, Web Data Mining, Crisis Informatics, and Applied Machine Learning. Her work "Information Credibility on Twitter" was awarded the 2021 Seoul Test of Time Award by the IW3C2 at The Web Conference.

  • Wendy Mackay, Inria; Université Paris-Saclay, France. Dr. Mackay directs the ExSitu research group in HCI at Inria and Université Paris-Saclay. By studying users who push the limits of interaction and their patterns of use around complex phenomena, Dr. Mackay explores the future of interactive technologies for creative professionals, with a particular focus on human-AI interaction and collaboration. She is an ACM Fellow and the 2021-22 Computer Science Chair for the Collège de France.

  • Cynthia Rudin, Duke University, US. Dr. Rudin’s research focuses on machine learning tools that help humans make better decisions, mainly interpretable machine learning and interpretable deep learning with domain-based constraints. She applies these methods to critical societal problems in criminology, healthcare, and energy grid reliability, as well as to computer vision.


Submissions to the workshop may address one or more of the following themes, or other relevant themes of interest:

  • Theoretical frameworks, disciplines and disciplinarity. How we approach AI and data science depends on the "lenses" that we bring, based in theory and in practice. Through what perspectives do you approach this complex domain?

  • Experiences and cases with AI systems. Theories suggest studies and experience reports. Studies and experience reports inform theories. What cases or experiences of human-AI interactions can you contribute to our inter-disciplinary knowledge and discussion?

  • Design frameworks for human initiative and AI initiative. Scholars have debated the question of who should have initiative or control between human and AI for over 70 years. What forms of discrete or shared initiative are possible now, and how can we include these possibilities in our systems?

  • Experiences and cases with human-AI collaboration. Design frameworks can inform applications. Experiences with applications can challenge frameworks, or lead to new frameworks. What cases or experiences of human-AI collaborations can you contribute to our inter-disciplinary knowledge and discussion?

  • Fairness and bias. Machine learning-based decision-making systems have the potential to replicate or even exacerbate social inequities and discrimination. As a result, there is a surge of recent work on developing machine learning algorithms with fairness constraints or guarantees. However, for these tools to have positive real-world impact, their design and implementation should be informed by a clear understanding of human behavior and real needs. What is the interplay between algorithmic fairness and HCI?

  • Privacy. In many important machine learning tasks – e.g. those related to healthcare – there is much to be gained from training on personal information, but we must take care to respect individuals’ privacy appropriately. In this workshop, we are particularly interested in understanding specific use cases and considering costs and benefits to individuals and society of making use of private data.

  • Transparency, explainability, interpretability, and trust. We are interested in understanding what specific types of explainability or interpretability are helpful to whom in concrete settings, and in exploring the tradeoffs that inevitably arise.

  • User research. What do we need to know in order to create or enhance an AI-based system? Our engineering heritage suggests that we seek user needs and resolve user pain points. How does our user research for these concepts change with AI systems? Are there other user research goals that are now possible with more sophisticated AI resources and implementations?

  • Accountability. When people engineer (or create) an AI system and its data, how do we hold them and ourselves accountable for design decisions and outcomes?

  • Automation of AI. It is tempting to apply AI to AI, in the form of automated AI. Is this a credible approach? Does human discernment play a role in creating AI systems? Is this a necessary role?

  • Evaluation. What are the appropriate measurement concepts and resulting metrics to assess our AI systems? How do we balance among efficiency, explainability, understandability, user satisfaction, and user hedonics?

  • Governance. Consequential machine learning systems impact the lives of millions of people in areas such as criminal justice, healthcare, education, credit scoring or hiring. Key concepts in the governance of such systems include algorithmic discrimination, transparency, veracity, explainability and the preservation of privacy. What is the role of HCI in relation to the governance of such systems?

  • Problematizing data. Data initially seem to be simple and "objective." However, a growing body of evidence shows the often-hidden role of humans in shaping the data in AI. Should we design our systems to strengthen human engagement with data, or to reduce human impact on data?

  • Qualitative data in data science. Quantitative data analyses may be powerful, but often decontextualized and potentially shallow. Qualitative data analyses may be insightful, but often limited to a narrow sample. How can we combine the strengths of these two approaches?

  • Values and ethics of AI. Values and ethics are necessarily entangled with localized, situated, and culturally-informed human perspectives. What are useful frameworks for a comparative analysis of values and ethics in AI?


(Links to papers and posters are provided with authors' permission. In some cases, authors requested to list title and authors only.)

Keynote 1:: Cynthia Rudin

Panel 1:: Explainable AI (XAI)

  • Federico Cabitza, Andrea Campagner,
    University of Milano-Bicocca.
    From Human Centered to Interactionist Artificial Intelligence [authors+title only]

  • Upol Ehsan & Mark O. Riedl,
    Georgia Tech.
    Explainability Pitfalls: Beyond Dark Patterns in Explainable AI [authors+title only]

  • Sruthi Viswanathan,
    Naver Labs Europe.
    Beware of the Ostrich Policy: End-Users’ Perceptions Towards Data Transparency and Control [link]

  • Claudio Santos Pinhanez,
    IBM Research.
    Expose Uncertainty, Instill Distrust, Avoid Explanations: Towards Ethical Guidelines for AI [link]

Keynote 2:: Barbara Poblete

Panel 2:: Methods

  • Nishtha Namdeo Vaidya, Pierre-Alexandre Murena, Samuel Kaski.
    Indian Institute of Technology Madras, Aalto University, University of Manchester.
    Human-AI Collaboration for Experimental Design [authors+title only]

  • Hariharan Subramonyam, Colleen Seifert, Eytan Adar,
    Stanford University, University of Michigan.
    How Can Human-Centered Design Shape Data-Centric AI? [link]

  • Fernando Delgado, Stephen Yang, Michael Madaio, Qian Yang,
    Cornell University, Microsoft Research.
    Stakeholder Participation in AI: Beyond “Add Diverse Stakeholders and Stir” [link]

  • Johannes Schleith,
    Thomson Reuters Labs.
    Human-centered Evaluation of Dynamic Content [link]

Keynote 3:: Wendy Mackay

Panel 3:: Human(s) and AI(s)

  • Kate Donahue, Alexandra Chouldechova, and Krishnaram Kenthapadi.
    Cornell University, Carnegie Mellon University, Amazon.
    Modeling Complementarity in Human-AI Collaboration [authors+title only]

  • Catalina Gomez, Mathias Unberath, Chien-Ming Huang,
    Johns Hopkins University.
    Knowledge Imbalance in AI-Assisted Decision-Making: Collaborating with Non-experts [link]

  • Sam Hepenstal, Dong-Han Ham, Leishi Zhang, B. L. William Wong,
    Defence Science and Technology Laboratory, Chonnam National University, Canterbury Christ Church University, Middlesex University.
    Developing Human-Centered Artificial Intelligence through cognitive engineering

  • Rezzani A., Menendez Blanco M., De Angeli A.,
    Free University of Bozen-Bolzano.
    Exploring the Dark Side of Human-AI Interaction [link]

Keynote 4:: Cecilia Aragon

Panel 4:: Ethics

  • Miguel Sicart, Irina Shklovski, Mirabelle Jones,
    IT University of Copenhagen, University of Copenhagen.
    Can Machine Learning be Moral? [authors+title only]

  • Joel Chan, Hal Daumé III, John P. Dickerson, Hernisa Kacorri, and Ben Shneiderman,
    University of Maryland.
    Supporting human flourishing by ensuring human involvement in AI systems [link]

  • Ashis Kumer Biswas, Geeta Verma, Justin Otto Barber,
    University of Colorado Denver, Radiology Partners.
    Improving Ethical Outcomes with Machine-in-the-Loop: Broadening Human Understanding of Data Annotations [link]

  • Keziah Naggita, J. Caesar Aguma,
    Toyota Technological Institute at Chicago, University of California Irvine.
    The Equity Framework [authors+title only]

Panel 5:: Fairness (no accompanying keynote)

  • Angel Hsing-Chi Hwang,
    Cornell University.
    Individuality in Human-Centered AI [link]

  • Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, Hanna Wallach,
    Microsoft Research, Carnegie Mellon University, Stanford University.
    Assessing Fairness in Practice: AI Teams’ Processes, Challenges, and Needs for Support [link]

  • Andre Fu, Elisa Ding, Mahdi S. Hosseini, Konstantinos N. Plataniotis,
    University of Toronto, University of New Brunswick.
    P4AI: Approaching AI Ethics through Principlism [link]

  • Wesley Hanwen Deng, Manish Nagireddy, Kenneth Holstein, Steven Wu, Haiyi Zhu,
    Carnegie Mellon University.
    Fairness in Practice: Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits [authors+title only]



  • Mary Anne Smart,
    University of California San Diego.
    Addressing Privacy Threats from Machine Learning [link]

  • Aviral Chharia, Shivu Chauhan, Rahul Upadhyay, Vinay Kumar,
    Thapar Institute of Engineering and Technology.
    From Convolutions towards Spikes: The Environmental Metric that the Community currently Misses [link]

Human(s) and AI(s)

  • Pragati Verma, Sudeeksha Murari,
    Amazon Alexa.
    Interpreting Voice Assistant Interaction Quality From Unprompted User Feedback [link]

  • Jack Cook,
    New York Times R&D.
    Switchboard: Automated News Q&A With an Editor in the Loop [link]

  • Debajyoti Datta, Maria Phillips, James P Bywater, Jennifer Chiu, Ginger S. Watson, Laura E. Barnes, Donald E. Brown,
    University of Virginia, James Madison University.
    Improving mathematical questioning in teacher training

  • Marine Carpuat, Ge Gao,
    University of Maryland.
    Human-Centered AI: The Case of Machine Translation for Cross-Lingual Teamwork [link]

  • Juan Sebastián Gómez-Cañón, Perfecto Herrera, Estefanía Cano, Emilia Gómez,
    Universitat Pompeu Fabra, Songquito UG, European Commission.
    Personalized musically induced emotions of not-so-popular Colombian music [link]

  • Mohammad Hossein Jarrahi, Mohammad Haeri, Vahid Davoudi,
    University of North Carolina, The University of Kansas Medical Center.
    The Key to an Effective AI-Powered Digital Pathology: Establishing a Symbiotic Workflow between Pathologists and Machine [authors+title only]


  • Zihan Wang, Jialin Lu, Oliver Snow, Martin Ester,
    Simon Fraser University.
    An Interactive Visualization Tool for Understanding Active Learning [link]

  • Gaia Pavoni, Massimiliano Corsini, Federico Ponchio, Alessandro Muntoni, Paolo Cignoni,
    TagLab: A human-centric AI system for interactive semantic segmentation [link]

  • Amama Mahmood, Gopika Ajaykumar, Chien-Ming Huang,
    Johns Hopkins University.
    How Mock Model Training Enhances User Perceptions of AI Systems [link]

  • Park Sinchaisri, Hamsa Bastani, Osbert Bastani,
    University of California Berkeley.
    Improving Human Decision-Making with Machine Learning [link]

  • Heloisa Candello,
    IBM Research.
    Bringing “conscious” access to micro-credit by enhancing non-traditional financial practices with AI in the Global South [link]

  • Orestis Papakyriakopoulos, Elizabeth Anne Watkins, Amy Winecoff, Klaudia Jazwinska, Tithi Chattopadhyay,
    Princeton University.
    Qualitative Analysis for Human Centered AI [link]

  • Ananya Nandy, Kosa Goucher-Lambert,
    University of California Berkeley.
    Considerations for Collaborative Human-AI Decision-Making in Engineering Design [authors+title only]

  • Evelyn Zuniga, Stephanie Milani, Mikhail Jacob, Katja Hofmann,
    Microsoft Research, Carnegie Mellon University, Resolution Games.
    Understanding Human-like Behavior in Video Game Navigation [authors+title only]

Explainable AI (XAI)

  • Claire Woodcock, Brent Mittelstadt, Dan Busbridge, Grant Blank,
    Oxford Internet Institute.
    Dr Bots: The impact of explanation types on layperson trust in AI-driven symptom checkers [authors+title only]

  • H. Jiang,
    Georgia Tech.
    Case Study on Two XAI Cultures: Non-technical Explanations in Deployed AI System [link]

  • Marko Tešić,
    University of London.
    On the transferability of insights from the psychology of explanation to explainable AI [link]

  • Yoonseo Cho, Eun Jeong Kang, Juho Kim,
    How Does Netflix “Understand” Me?: Exploring End-user Needs to Design Human-centered Explanations [link]

Workshop Proposal [link]

Program Committee

With grateful thanks to:

  • Shazia Afzal, IBM Research

  • Mayank Agarwal, IBM Research

  • Zahra Ashktorab, IBM Research

  • Michelle Brachman, IBM Research

  • Heloisa Caroline de Souza Pereira Candello, IBM Research

  • Munmun De Choudhury, Georgia Tech

  • Michael Desmond, IBM Research

  • Rahul Divekar, Educational Testing Service

  • Upol Ehsan, Georgia Tech

  • Melanie Feinberg, University of North Carolina

  • Katy Ilonka Gero, Columbia University

  • Werner Geyer, IBM Research

  • Emilia Gómez, Joint Research Centre, European Commission

  • Juan Sebastián Gómez Cañón, Universitat Pompeu Fabra

  • Xiaowei Gu, University of Kent

  • Michal Jacovi, IBM Research

  • Narendra Nath Joshi, IBM Research

  • Mary Beth Kery, Apple Computer

  • Q. Vera Liao, Microsoft Research

  • Samir Passi, Microsoft

  • David Piorkowski, IBM Research

  • Rogerio Abreu de Paula, IBM Research

  • Claudio Santos Pinhanez, IBM Research

  • John Richards, IBM Research

  • José Luis Rosselló Sanz, Universitat de les Illes Balears

  • Steven Ross, IBM Research

  • Chris Russell, Turing Institute

  • Yara Rizk, IBM Research

  • Hendrik Strobelt, IBM Research

  • Kartik Talamadupula, IBM Research

  • Dakuo Wang, IBM Research

  • Longqi Yang, Microsoft Research

  • Mikhail Yurochkin, IBM Research


Organizers

  • Michael Muller, IBM Research, Cambridge MA USA on unceded lands of Wampanoag and Massachusett Nations

  • Plamen Angelov, Lancaster University, Lancaster England UK

  • Shion Guha, University of Toronto, Toronto Ontario Canada

  • Marina Kogan, University of Utah, Salt Lake City UT USA

  • Gina Neff, Oxford Internet Institute, Oxford England UK

  • Nuria Oliver, Data-Pop Alliance and Vodafone Institute, New York NY USA

  • Manuel Gomez Rodriguez, Max Planck Institute, Kaiserslautern Germany

  • Adrian Weller, University of Cambridge and Alan Turing Institute, London England UK

Inquiries and updates: michael_muller@us.ibm.com