MERCADO Workshop @ IEEE VIS 2023
Our half-day workshop, "Multimodal Experiences for Remote Communication Around Data Online (MERCADO)", took place on Sunday, October 22, 2023, at IEEE VIS 2023 in Melbourne. It consisted of two 75-minute sessions in Room 105, separated by a 30-minute coffee break.
Note: An archival version of our IEEE VIS workshop proposal can be found here: https://arxiv.org/abs/2303.11825.
Session 1: 9:00 – 10:15
Introduction & Workshop Goals (10 min)
Chair: Matthew Brehmer
Keynote Presentation
Moderator: Maxime Cordeil
Bio: I am a Human-Computer Interaction researcher. I specialize in new media production technologies and the opportunities arising from novel interaction technologies, collaborative interaction, and mobile and situated computing. I have a passion for exploring the possibilities of new media production tools, processes, and social dynamics, particularly given the vast array of innovative interaction technologies and networking infrastructure now available. I bring my first-hand experience of working with production technologies, as an event manager and technician, to envision new forms of media consumption, production, and delivery.
Title: Interaction as Live Events - How to Leverage Production Value to Design Meaningful Media Engagements.
Abstract: Synchronous video-based communication is an integral part of work and social life. In recent years, events ranging from conferences and teaching to medical consultations and family get-togethers have moved largely online. The experience of such events, however, remains homogeneous, primarily due to the design constraints of the available platforms. By designing these interactions as 'live events', we can leverage well-established tropes, genre conventions, and understanding to improve these experiences, better communicating with our participants and audiences.
Paper Presentations - "Immersion" (3 x 10 min)
Chair: Takayuki Itoh
Paper 1: Asymmetric Immersive Presentation System for Financial Data Visualization (Gottsacker et al)
Paper 2: Echoes in the Gallery: A Collaborative Immersive Analytics System for Analyzing Audience Reactions in Virtual Reality Exhibitions (Yuan et al)
Paper 3: Hanstreamer: an Open-source Webcam-based Live Data Presentation System (Kristanto et al)
Coffee Break and Demos (10:15 – 10:45)
Session 2: 10:45 – 12:00
Paper Presentations - "Multimodal and Visual Analytics" (3 x 10 min)
Chair: Maxime Cordeil
Paper 4: CommunityClick-Virtual: Multi-Modal Interactions for Enhancing Participation in Virtual Meetings (Jasim et al)
Paper 5: Combining Voice and Gesture for Presenting Data to Remote Audiences (Srinivasan & Brehmer)
Paper 6: Talking to Data Visualizations: Opportunities and Challenges (Molina León et al)
Panel Discussion: Perspectives on MERCADO (~25 min)
Panelists: Tom Bartindale, Andrea Batch, and Andy Cotgreave
Moderators: Maxime Cordeil and Takayuki Itoh
Next Steps & Workshop Conclusion (~20 min)
Moderator: Christophe Hurter
Accepted Workshop Submissions:
Asymmetric Immersive Presentation System for Financial Data Visualization. Matt Gottsacker, Mengyu Chen, David Saffo, Feiyu Lu, Blair MacIntyre. [PDF]
Abstract: This paper presents the current design of a work-in-progress system for giving engaging, immersive presentations based on financial data in an immersive environment. The system consists of a presenter controlling the presentation flow through a tablet interface and an audience experiencing the presentation content through an augmented reality (AR) head-worn display (HWD). We discuss the user scenario motivating our system design, its associated design considerations, the high level features of the system being built, and how a system such as this can be extended to other presentation contexts.
Echoes in the Gallery: A Collaborative Immersive Analytics System for Analyzing Audience Reactions in Virtual Reality Exhibitions. Linping Yuan, Wai Tong, Kentaro Takahira, Zikai Wen, Yalong Yang, Huamin Qu.
Abstract: Virtual reality (VR) has enriched the exhibition experience, yet curators and artists often face challenges in curation due to disagreements in the spatial arrangement of exhibits. Although VR applications permit audiences to express their reactions, these multimodal and semantically rich data remain underutilized in spatial arrangement deliberations. Based on insights from semi-structured interviews with curators and artists, we outline design requirements to enhance collaboration in analyzing visitors' reactions, focusing on traffic flow analysis, angle-induced reactions, what-if analysis, and cross-view annotations. To meet these requirements, we propose a novel collaborative immersive analytic system that visualizes audience flow and individual reactions within VR spaces, facilitating more effective communication and decision-making between artists and curators. Future research will explore advanced visual design techniques to address scalability issues and in-place spatial comparison. We will also finish the development and conduct usability studies.
Hanstreamer: an Open-source Webcam-based Live Data Presentation System. Adrian Kristanto, Maxime Cordeil, Benjamin Tag, Nathalie Riche, Tim Dwyer. [PDF]
Abstract: We present Hanstreamer, a free and open-source system for webcam-based data presentation. The system performs real-time gesture recognition on the user’s webcam video stream to provide interactive data visuals. Apart from standard chart and map visuals, Hanstreamer is the first such video data presentation system to support network visualisation and interactive DimpVis-style time-series data exploration. The system is ready for use with popular online meeting software such as Zoom and Microsoft Teams.
CommunityClick-Virtual: Multi-Modal Interactions for Enhancing Participation in Virtual Meetings. Mahmood Jasim, Ali Sarvghad, Narges Mahyar. [PDF]
Abstract: Government officials often rely on public engagements to gauge people's perspectives on civic issues and gather feedback to make informed policy decisions. Traditional public engagement methods are often face-to-face, such as town halls, public forums, and workshops. However, during the COVID-19 pandemic, these approaches were rendered ineffective due to health risks and the engagement process saw a shift towards virtual meetings. While accessible to a broader audience, virtual public meetings introduced challenges around limited time and opportunity for attendees to share feedback. Furthermore, attendees were often required to identify themselves, potentially discouraging reticent attendees from speaking up and risking confrontations with other attendees. To mitigate this issue, we designed and developed CommunityClick-Virtual, a multi-modal companion web application that allows virtual meeting participants to provide feedback on meeting discussions silently and anonymously using six customizable options or through chat messages without the need to speak up. The organizers have access to all attendee feedback channels where they can use synchronized coordinated visualizations to gather a more holistic understanding of people's perspectives. The field deployments of CommunityClick-Virtual demonstrated its efficacy in increasing participation and enabling organizers to identify insights that could help them make more informed decisions.
Combining Voice and Gesture for Presenting Data to Remote Audiences. Arjun Srinivasan, Matthew Brehmer. [PDF]
Abstract: We consider the combination of voice commands with touchless bimanual gestures performed during presentations about data delivered via teleconference applications. Our demonstration extends recent work that considers the latter interaction modality in a presentation environment where charts can be composited over live webcam video, charts that dynamically respond to the presenter’s operational (i.e., functional and deictic) hand gestures. In complementing these gestures with voice commands, new functionality is unlocked: the ability to precisely filter, sort, and highlight subsets in the data. While these abilities provide presenters with more flexibility in terms of presentation linearity and the capacity for responding to audience questions, imperative voice commands can come across to audiences as stilted or unnatural, and may be distracting.
Talking to Data Visualizations: Opportunities and Challenges. Gabriela Molina León, Petra Isenberg, Andreas Breiter. [PDF]
Abstract: Speech is one of the interaction modalities that we increasingly come across in natural user interfaces. However, its use in collaborative scenarios has not yet been thoroughly investigated. In this reflection statement, we discuss the opportunities and challenges of integrating speech interaction in multimodal solutions for collaborative work with data visualizations. We discuss related findings from other research communities and how we could build upon their work to explore and make use of speech interaction for data visualizations in co-located, hybrid, and remote settings.