SHAI 2023: Workshop on Designing for Safety in Human-AI Interactions
IUI 2023 Workshop
Hybrid, Half Day (March 27, 2023)
Workshop Goals
The main research question of SHAI — Safety in Human-AI Interactions — is: How can we make the outcomes of ML models, especially generative models, safer when humans engage with these models? To this end, the workshop will bring together multidisciplinary industry and academic partners. We will share challenges in this space, identify opportunities for collaboration to address the highlighted challenges, and surface challenges not yet listed. This requires researchers at the intersection of AI and HCI to learn from each other and build consensus on the gaps in our collective understanding of how humans and ML models function.
Workshop Schedule & Format
The workshop will be a half-day mini-conference in a hybrid format, allowing both in-person and virtual participation. In-person participants will be able to interact fully with other participants, while virtual participants can still take part in the majority of activities. To let a broader audience experience the workshop asynchronously, we will record the workshop presentations with the authors' consent. We will provide related materials, such as workshop videos, position papers, and discussion outcomes, on our website. The tentative workshop schedule can be found on the schedule page.
Call for Papers
Our call invites submissions under the following three main themes:
Position Paper: Proposing an innovative, not-yet-fully-tested idea relevant to safe AI-driven design.
Opinion Paper: Sharing an opinion on how future research and practice should be directed, grounded in best practices or theoretical reasoning.
Late-breaking Work: Describing an early-stage design with tentative findings relevant to safe AI.
Participants will submit their contributions as short papers of 2–6 pages, or as demos, following the IUI paper and demo guidelines and addressing the themes and challenges highlighted above. For more details, please visit the CfP page.
Important Dates
Workshop paper submission: Rolling deadline, January 15 – March 1, 2023
Notification of acceptance: Rolling (at most one week after submission)
Workshop date: March 27, 2023
Organizers
Tesh 'Nitesh' Goyal
Tesh Goyal (he/him) leads research on tools designed to build AI responsibly at Google Research. His work has focused on AI for social good for marginalized populations, including tools for journalists and activists to manage harassment, reducing biases during investigative sensemaking, unpacking the role of data annotators' identity in ML outcomes, and more.
Sungsoo Ray Hong
Sungsoo Ray Hong (he/him) is an Assistant Professor in the Department of Information Sciences and Technology at George Mason University. His research mission is Alignable AI: establishing empirical understanding and designing novel tools to align AI with humans' expectations, norms, and mental models.
Regan L. Mandryk
Regan L. Mandryk (she/her) is a Canada Research Chair in Digital Gaming Technologies and Experiences and Professor of Computer Science at the University of Saskatchewan. Her work focuses on how people use playful technologies for social and emotional wellbeing, and how toxicity thwarts the connection and recovery benefits provided by multiplayer games.
Toby Jia-Jun Li
Toby Jia-Jun Li (he/him) is an Assistant Professor in Computer Science and Engineering at the University of Notre Dame. Toby designs, builds, and studies interactive systems that facilitate effective human-AI collaboration in various task domains. Focus areas of his work include human-centered data science, human-AI co-creation in creative tools, human-AI collaboration in programming, and worker empowerment against AI inequality in gig work.
Kurt Luther
Kurt Luther (he/him) is an Associate Professor of Computer Science and (by courtesy) History at Virginia Tech. His research group, the Crowd Intelligence Lab, builds and studies systems that combine the complementary strengths of crowdsourced human intelligence and AI to support ethical, effective investigations.
Dakuo Wang
Dakuo Wang (he/him) is a Senior Research Staff Member and leads the human-centered natural language interaction strategy at IBM Research. He specializes in designing and developing human-centered AI systems for real-world user needs and has published more than 50 papers and 50 patents on related topics.
Website designed by Zheng Ning. Banner images generated by DALL-E 2 with the prompt: "an oil painting on human-ai interaction"