About This Workshop
Multimodal interaction offers many potential benefits for data visualization, helping people stay in the flow of their visual analysis and presentation. Often, the strengths of one interaction modality can offset the weaknesses of another. However, existing visualization tools and interaction techniques have mostly explored a single input modality such as mouse, touch, pen, or, more recently, natural language and speech. Recent interest in deploying data visualizations on diverse display hardware, including mobile devices, AR/VR, and large displays, creates an urgent need for natural and fluid interaction techniques that work in these contexts. Multimodal interaction holds strong promise for such situations, but its unique challenges for data visualization have yet to be deeply investigated.
This workshop will bring together researchers with expertise in visualization, interaction design, and natural user interfaces. We aim to build a community of multimodal visualization researchers, explore synergies and challenges in our research, and establish an agenda for research on multimodal interactions for visualization.
Important Dates
Submissions
We invite 2-4 page position papers (in the CHI Extended Abstracts format, with the page limit including references) on any topic related to multimodal interaction for data visualization. Position papers should outline experiences, interests, and challenges around multimodal interaction for visualization, including modalities such as pen, touch, gesture, speech, and natural language. Topics may include, but are not limited to:
Please submit via the EasyChair system, selecting the MultimodalVis 2018 track.
Organizers
Bongshin Lee, Microsoft Research
Arjun Srinivasan, Georgia Institute of Technology
John Stasko, Georgia Institute of Technology
Melanie Tory, Tableau Software
Vidya Setlur, Tableau Software