Call for Papers

Interaction with digital systems has traditionally exhibited an explicitly noticeable character: the system informs the user about its current state upfront, such as through a dialogue box on a visual display, and the user responds with a decision manifested in evident mechanical action, such as a mouse movement and click. There are strong reasons for system designers to revisit and rethink this classic interactive structure between system and user, as indicated by the significant interest in alternative modes of interaction in recent years from within the HCI community, including Peripheral Interaction (Bakker, 2013), Lived Informatics (Rooksby et al., 2014), Mindless Computing (Adams et al., 2015), Ambient Displays (Hausen, 2014), Subtle Gaze Direction (Bailey et al., 2009), and Implicit HCI (Schmidt, 2000), as well as from the AI communities, such as Mixed-Initiative Interaction (Horvitz, 1999; Yannakakis et al., 2014), Human-Centered ML (Fiebrink and Gillies, 2018), and human-in-the-loop interactive AI approaches. Furthermore, there is a shared understanding within the HCI community (e.g. “Intelligibility”, Bellotti and Edwards, 2001) and the AI community that deep learning systems pose transparency challenges. Acknowledging this complexity, researchers within HCI advocate a shift towards designing for a lived experience rather than focusing mainly on user behaviour and instrumental goals. This workshop observes:

  1. Interaction with intelligent digital systems has become so pervasive in everyday life that much of it happens without a second thought, thanks to the human mind’s tendency to automate routine tasks (habituation).

  2. New ways of communicating information between system and user, in both directions, emerging from recent developments in AI, data science, and sensor and actuation technologies, invisibly shift decision-making tasks from humans to machines.

  3. Evidence shows positive results from taking a collaborative approach to interaction between humans and AI in, for instance, complex creative tasks, compared to non-hybrid approaches.

Human decision-making will increasingly be invisibly influenced by pervasive AI through various forms of existing and new kinds of HCI, often without users being aware of the AI system ‘under the hood’. The growing field of explainable AI addresses the transparency, interpretability, explainability, and control of AI algorithms, although more focus is needed on how to place AI transparently under the control of the end user while balancing the engagement level of the user experience (Zhu et al., 2018). The human-centered perspective fostered within HCI could contribute here. HCI also provides methodologies that can strengthen the evaluation of AI systems with respect to these constructs. The workshop will further explore how system designers design human-AI collaboration today, and what our ideas are for tomorrow, in order to make this collaboration more transparent, interpretable, and explainable for the end user.

Key topics of this workshop include but are not limited to:

  • Transparent, interpretable and explainable AI systems – establishing user awareness when desirable

  • Ethics and privacy issues with invisible pervasive AI systems

  • Designing for lived experiences with invisible and pervasive AI systems

  • Designer-centered and/or mixed-initiative co-creativity systems

  • Co-evolution of the user and AI system interaction; intelligent and adaptive UIs

  • Humans-in-the-loop systems with invisible and pervasive AI

  • Machine learning algorithms for hybrid decision-making with a focus on end users

  • Attention-aware systems based on eye tracking, human sensing technologies, etc.

  • Models for unconscious and conscious HCI beyond implicit/explicit input and output

  • Perceptual and cognitive methods for subtle cueing and priming users such as subtle gaze direction, language-based priming and interaction; persuasive technologies

  • Spatiotemporal properties of emerging AI-HCI systems (wearable and context-aware systems in principle enable split-second hybrid decision-making everywhere)

Important Dates:

  • Deadline for paper submissions: September 04, 2020 (extended from August 21, 2020), 23:59 AoE

  • Notification for accepted papers: September 21, 2020 (changed from September 11, 2020)

  • Camera ready (for internal distribution amongst workshop participants): September 25, 2020

  • Workshop date: October 25 or 26, in conjunction with NordiCHI 2020

Submissions:

Submit your two- to four-page ACM single-column paper according to the NordiCHI 2020 formatting guidelines (https://nordichi2020.org/instructions-authors#/) in one of these categories:

  1. Short papers describe focused empirical studies, work in progress, or extensions of work previously reported in other venues.

  2. Position papers provide an innovative discussion or describe the authors’ visions on topics of interest for AI-driven HCI systems, or address ethical questions at the crossroads between AI and HCI.

  3. Demo papers introduce and describe a system prototype or a novel system extension. Authors are welcome to include or point to captioned demo videos, visualizations or animations, web-based demos, code repositories or executables, or other suitable materials to demonstrate their contribution.

We welcome captioned videos, binary files, or other materials accompanying submissions to demonstrate the contribution when necessary. All submissions are reviewed double-blind. Papers are submitted via the EasyChair submission system.

On Acceptance:

At the workshop, the organizers and participants will jointly decide whether contributing papers will be included, in revised form, in a proposal to a publisher for a journal special issue or a book, or remain as open online proceedings on the workshop website.