Call for Participation

About This Workshop

Multimodal interaction offers many potential benefits for data visualization, helping people stay in the flow of their visual analysis and presentation. Often, the strengths of one interaction modality can offset the weaknesses of another. However, existing visualization tools and interaction techniques have mostly explored a single input modality such as mouse, touch, pen, or, more recently, natural language and speech. Recent interest in deploying data visualizations on diverse display hardware, including mobile, AR/VR, and large displays, creates an urgent need to develop natural and fluid interaction techniques that work in these contexts. Multimodal interaction holds strong promise for such situations, but its unique challenges for data visualization have yet to be deeply investigated.

This workshop will bring together researchers with expertise in visualization, interaction design, and natural user interfaces. We aim to build a community of multimodal visualization researchers, explore synergies and challenges in our research, and establish an agenda for research on multimodal interactions for visualization.

AVI 2018 Conference


Important Dates

  • March 16, 2018: Position paper submission deadline
  • March 23, 2018: Notification of acceptance
  • May 11, 2018: Final position papers due
  • May 29, 2018: Workshop


Submissions

We invite 2-4 page position papers (in the CHI Extended Abstracts format, with the page limit including references) on any topic related to multimodal interaction for data visualization. Position papers should outline experiences, interests, and challenges around multimodal interaction for visualization, including pen, touch, gesture, speech, and natural language. Topics may include, but are not limited to:

  • Visualization on various displays beyond the desktop, including mobile, large screen, and AR/VR
  • Visualization designs that leverage specific interaction modalities, including natural language interaction (text and voice), pen, touch, and mouse
  • Libraries and toolkits to support specific interaction modalities
  • Evaluation methods
  • Use cases and motivating scenarios of multimodal interaction for data visualization

Please submit via the EasyChair system and select the MultimodalVis 2018 track.


Organizers

Bongshin Lee, Microsoft Research

Arjun Srinivasan, Georgia Institute of Technology

John Stasko, Georgia Institute of Technology

Melanie Tory, Tableau Software

Vidya Setlur, Tableau Software