Keynotes

“AI For Good” Isn’t Good Enough: A Call for Human-Centered AI

Speaker: James Landay (Stanford University)

Abstract: AI for Good initiatives recognize the potential impacts of AI systems on humans and societies. However, simply recognizing these impacts is not enough. To be truly Human-Centered, AI development must be user-centered, community-centered, and societally-centered. User-centered design integrates techniques that consider the needs and abilities of end users, while also improving designs through iterative user testing. Community-centered design engages communities in the early stages of design through participatory techniques. Societally-centered design forecasts and mediates potential impacts on a societal level throughout a project. Successful Human-Centered AI requires the early engagement of multidisciplinary teams beyond technologists, including experts in design, the social sciences and humanities, and domains of interest such as medicine or law, as well as community members. In this talk, I will elaborate on my argument for an authentic Human-Centered AI.

Designing Easy and Useful Human Feedback 

Speaker: Anca Dragan (UC Berkeley)

Abstract: Machine learning focuses on learning from demonstrations, labels, or comparisons. These are not necessarily the easiest forms of feedback for people to give, nor are they the most informative for the robot. Can we do better?

Beyond RLHF: A Human-Centered Approach to AI Development and Evaluation

Speaker: Meredith Ringel Morris (Google DeepMind)


Abstract: The role of Reinforcement Learning from Human Feedback (RLHF) in the performance breakthroughs of ChatGPT versus prior systems has turned the ML community’s interest toward people. While RLHF (and other human-powered aspects of modern ML development, including data labeling, red teaming, etc.) is a valuable technique, this focus on an extremely narrow role for people as humans-in-the-loop overlooks the value of holistically considering humans as end-users of AI. Taking an end-user perspective is vital for developing AI models, benchmarks, and products that align with the needs and values of society. In this talk, I’ll give an overview of concepts from HCI that will strengthen AI researchers’ knowledge and skills for responsibly and strategically including human feedback in all aspects of the AI research and development lifecycle, and I’ll share examples of how considering end-user needs has led to innovative AI systems in my own research.



Human-Centered AI Transparency: Lessons Learned and Open Questions in the Age of LLMs

Speaker: Q. Vera Liao (Microsoft Research)


Abstract: Transparency—enabling appropriate understanding of AI models or systems—is considered a pillar of Responsible AI. The AI research community and industry have developed an abundance of techniques and artifacts in the hope of achieving transparency, including transparent model reporting, evaluation, explainable AI (XAI), and communication of model uncertainty. Meanwhile, the HCI community has taken human-centered approaches to these topics, building on its long-standing interest in design to support user understanding and appropriate mental models. In this talk, I will give an overview of common approaches and lessons learned from HCI research on AI transparency. With the recent rise of large language models (LLMs) and LLM-infused systems, I will also reflect on the unique challenges they pose for transparency and discuss open questions.



Detecting and Countering Untrustworthy Artificial Intelligence (AI) 

Speaker: Nikola Banovic (University of Michigan)

Abstract: The ability to distinguish trustworthy from untrustworthy Artificial Intelligence (AI) is critical for broader societal adoption of AI. Yet, existing Explainable AI (XAI) methods attempt to persuade end-users that an AI is trustworthy by justifying its decisions. Here, we first show how untrustworthy AI can misuse such explanations to exaggerate its competence under the guise of transparency and deceive end-users—particularly those who are not savvy computer scientists. Then, we present findings from the design and evaluation of two alternative XAI mechanisms that help end-users form their own explanations about the trustworthiness of AI. We use our findings to propose an alternative framing of XAI that helps end-users develop the AI literacy they require to critically reflect on AI and assess its trustworthiness. We conclude with implications for future AI development and testing, public education and investigative journalism about AI, and end-user advocacy to increase access to AI for a broader audience of end-users.

Human-AI Collaboration for Ambiguities, Uncertainties, and Evolving Objectives 

Speaker: Toby Jia-Jun Li (University of Notre Dame)


Abstract: Recent advances in AI and ML, particularly large language models, have paved the way for automation and user assistance in numerous complex domains. However, AI support in these areas often grapples with ambiguous user intents, uncertain objectives, and user goals that evolve as tasks progress. This talk explores effective strategies to facilitate human-AI collaboration in these challenging contexts.



Three Lessons Towards Human-Centered Explanations

Speaker: Chenhao Tan (University of Chicago)


Abstract: Explanations of AI predictions are considered crucial for human-AI interactions. I argue that successful human-AI interactions require two steps: AI explanation and human interpretation. Therefore, effective explanations necessitate an understanding of human interpretation. In this talk, I will summarize our recent work in this direction into three lessons:


Lesson 1: Human intuitions are necessary for effective AI explanations.

Lesson 2: Articulate the interaction between explanations and human intuitions; collect data to model them.

Lesson 3: One size does not fit all.


I will conclude by discussing future directions towards human-centered explanations.