Call for Participation

Generative ML models present an unprecedented risk of producing unsafe outcomes and harms at a volume that can be incredibly challenging to manage during human-AI interactions. Despite best intentions, inadvertent outcomes may accrue and lead to harms, especially to marginalized groups in society. On the other hand, those motivated and skilled at causing harm may be able to perpetrate even deeper harms. Our workshop is aimed at practitioners and academic researchers at the intersection of AI and HCI who are interested in understanding these socio-technical challenges and identifying opportunities to address them collaboratively.


We invite researchers from all disciplinary backgrounds to participate in the workshop. Since our workshop's emphasis is on effectively connecting academic research to real practice, we are targeting a balanced mix of participants from academia and industry, mirroring the balance of the workshop organizers across the two.


Topic

How can we make the outcomes of ML models, especially generative models, safer when humans engage with these models?

Existing open questions:

  • Computational advances in pursuing research beyond model accuracy by focusing on catastrophic consequences to humans

  • Creative opportunities for design to manage the user experience and journeys of humans who are likely to be targeted at scale

  • Practical challenges of tracking human and algorithmic harmful / unsafe operations at scale

  • Balancing model accuracy and safety-related metrics which pose a technical dilemma for product-oriented practitioners and ethicists

  • Improving socio-technical understanding of the human behaviors that motivate non-positive outcomes

  • Creating and learning from theoretical models and frameworks that define non-positive behaviors

  • Sociological observations of impact of human and algorithmic behaviors on society

  • Balancing safety with constructive conflict

  • And more


Submission Format

Participants are invited to submit one of the following in the form of short papers or demos (2 - 6 pages):

  • Position Paper: Proposing an innovative idea relevant to safe AI-driven design that has not yet been fully tested.

  • Opinion Paper: Sharing an opinion on how future research and practice should be directed, based on best practices or theoretical reasoning.

  • Late-breaking Work: Describing an early-stage design with tentative findings relevant to safe AI.

Page length: 2 - 6 pages | Maximum length: 10,000 words

Templates: Single Column Word Submission Template | Single Column LaTeX Template using \documentclass[manuscript,review,anonymous]{acmart}.

Submission form: https://forms.gle/CbVVtD2x6XQeP2877

Deadline: Jan 29, 2023

We aim to respond with acceptance decisions within a week of submission. Accepted submissions are expected to be presented either in person or virtually at the workshop by at least one author. Please submit early to enable faster visa processing!

Website designed by Zheng Ning. Banner image generated by DALL-E 2 with the prompt: "an abstract painting of an academic workshop at a conference"