NII Shonan Meeting #213
The official NII Shonan web page for this meeting is: https://shonan.nii.ac.jp/seminars/213/.
NII Shonan Meeting #213: "Augmented Multimodal Interaction for Synchronous Presentation, Collaboration, and Education with Remote Audiences" will take place at the Shonan Village Center between June 24–27, 2024 (check-in: June 23, 2024).
This seminar is a continuation and extension of the themes discussed at the MERCADO workshop at IEEE VIS 2023 (Multimodal Experiences for Remote Communication Around Data Online).
Our schedule follows the NII Shonan meeting schedule template.
Check-in: Sunday, June 23, 2024
15:00: Check-in
19:00: Welcome banquet
21:00: Free time / socializing
Day 1: Monday, June 24, 2024
07:30: Breakfast
09:00: Introduction to NII Shonan meeting (Takayuki)
09:10: Seminar Session 1
9:10 – Introduction to Organizers and Attendees (Matt)
9:20 – Overview of Shonan Meeting 213 Program
9:30 – Overview of Planned Discussion Topics / Goals
9:40 – Recap of MERCADO @ IEEE VIS 2023 Paper Program (Maxime)
9:45 – Recap of MERCADO @ IEEE VIS 2023 Panel (Christophe)
9:50 – Matt Brehmer
9:55 – Maxime Cordeil
10:00 – Christophe Hurter
10:05 – Takayuki Itoh
10:10 – Kiyoshi Kiyokawa
10:15 – Harald Reiterer
10:20 – Ryo Suzuki
10:25 – Jian Zhao
10:30: Break
11:00: Seminar Session 2
11:00 – Mahmood Jasim
11:05 – David Saffo
11:10 – Zhutian Chen
11:15 – Masahiko Itoh
11:20 – Hideaki Kuzuoka
11:25 – Lyn Bartram
11:30: Group photo
12:00: Lunch
13:30: Seminar Session 3
13:30 – Breakout session topic mapping (30 min)
14:00 – Pitching and scheduling breakout groups (45 min)
14:45 – Breakout Session 1 (45 min)
15:30: Break
16:00: Seminar Session 4
16:00 – Introduction to Remote Attendees / Petra Isenberg (remote)
16:05 – Samuel Huron
16:10 – Gabriela Molina León
16:15 – Bektur Ryskeldiev
16:20 – Breakout Session 2 (100 min)
18:00: Dinner
19:30: Free time / socializing
Day 2: Tuesday, June 25, 2024
07:30: Breakfast
09:00: Seminar Session 5
9:00 – Sheelagh Carpendale (remote)
9:05 – Yalong Yang (remote)
9:10 – Anthony Tang
9:15 – Brian Smith
9:20 – Alark Joshi
9:25 – Tim Dwyer
9:40 – Breakout Session 3: Report presentation (50 min)
10:30: Break
11:00: Seminar Session 6
11:00 – Mar Gonzalez Franco + Eric Gonzalez (remote)
11:20 – Breakout groups report back (40 min)
11:20 – Accessibility (champion: Bektur)
11:30 – AI (champion: Tony)
11:40 – Techniques (champion: Tim)
11:50 – Users / Applications: (champion: David)
12:00: Lunch
13:30: Seminar Session 7
13:30 – Arnaud Prouzeau
13:35 – Jonathan Schwabish
13:40 – Yasuyuki Sumi
13:45 – Wolfgang Büschel
13:50 – Bongshin Lee
13:55 – Andrew Cunningham
14:00 – Breakout topic re-assessment + group shuffling (30 min)
14:30 – Breakout Session 4 (60 min)
15:30: Break
16:00: Seminar Session 8
16:00 – Breakout Session 5 (120 min)
18:00: Dinner
19:30: Free time / socializing
Day 3: Wednesday, June 26, 2024
07:30: Breakfast
09:00: Seminar Session 9
9:00 – Breakout Session 6: Report preparation (45 min)
9:45 – Breakout groups report their Grand Challenges (45 min) + voting to rank the challenges
11:00: Break
11:15: Seminar Session 10
11:15 – #unconference session: demos / provocations / tutorials (45 min)
12:00: Lunch
13:30: Excursion
18:15: Main Banquet
21:00: Free time / socializing
Day 4: Thursday, June 27, 2024
07:00: Check-out opens
07:30: Breakfast
09:00: Seminar Session 11
9:00 – Challenge and scope consolidation (45 min)
9:45 – Breakout session 7: develop writing plan (45 min)
10:30: Break
11:00: Seminar Session 12 (closing)
Discussion: Next steps + publishing / collaboration plans (60 min)
12:00: Lunch
13:30: Dismissal
Meeting Participants
organizers:
Matthew Brehmer • Tableau Research (USA)
Maxime Cordeil • The University of Queensland (Australia)
Christophe Hurter • ENAC / University of Toulouse (France)
Takayuki Itoh • Ochanomizu University (Japan)
confirmed attendees:
Lyn Bartram • Simon Fraser University (Canada)
Wolfgang Büschel • TUD Dresden University of Technology (Germany)
Sheelagh Carpendale • Simon Fraser University (Canada)*
Zhutian Chen • University of Minnesota (USA)
Andrew Cunningham • University of South Australia (Australia)
Tim Dwyer • Monash University (Australia)
Mar Gonzalez Franco • Google (USA)*
Eric Gonzalez • Google (USA)*
Samuel Huron • Institut Polytechnique de Paris (France)
Petra Isenberg • Inria / Université Paris-Saclay (France)*
Masahiko Itoh • Hokkaido Information University (Japan)
Mahmood Jasim • Louisiana State University (USA)
Alark Joshi • University of San Francisco (USA)
Kiyoshi Kiyokawa • Nara Institute of Science and Technology (Japan)
Hideaki Kuzuoka • University of Tokyo (Japan)
Bongshin Lee • Yonsei University (Korea)
Gabriela Molina León • University of Bremen (Germany)
Arnaud Prouzeau • Inria (France)
Harald Reiterer • University of Konstanz (Germany)
Bektur Ryskeldiev • Mercari R4D (Japan)
David Saffo • JPMorgan Chase & Co. (USA)
Jonathan Schwabish • Urban Institute (USA)
Brian Smith • Columbia University (USA)
Yasuyuki Sumi • Future University Hakodate (Japan)
Ryo Suzuki • University of Calgary (Canada)
Anthony Tang • Singapore Management University (Singapore)
Yalong Yang • Georgia Tech (USA)*
Jian Zhao • University of Waterloo (Canada)
* joining remotely
Planned Discussion Topics
The meeting will focus on emerging technologies and research that can be applied to augmented multimodal interaction for synchronous presentation, collaboration, and education with remote audiences. It will also serve as a forum to identify new application scenarios and expand upon existing ones, and to prototype and test new interaction techniques applicable across these scenarios.
We anticipate that the results of this meeting may include contributions to several academic communities, including those affiliated with ACM SIGCHI (CHI, CSCW, UIST) and the IEEE VGTC (VIS, PacificVis, ISMAR, EuroVis). During this seminar, participants will reflect upon the following research questions:
Considering presentation techniques employed by television news broadcasters for presenting technical or data-rich stories (e.g., weather, finance, sports), how can these techniques be applied and expanded upon for augmented multimodal interaction for synchronous presentation, collaboration, and education with remote audiences?
Considering video compositing techniques employed by livestreamers (e.g., on Twitch, YouTube, Facebook Live) and recorded video content creators (e.g., on YouTube, TikTok), how can these techniques be applied during synchronous communication and in conjunction with multimodal interaction (e.g., pose, gesture, voice)?
Considering the techniques by which people represent and interact with multimedia content in immersive XR (VR / AR), how can we extend or adapt these techniques to augmented multimodal interaction for synchronous presentation, collaboration, and education with remote audiences?
How can techniques initially designed with expensive / exclusive hardware be adapted to low-cost, accessible hardware contexts?
How could techniques designed for use with depth sensors and pointing devices be adapted to low-cost, accessible hardware contexts?
How can we support multi-party augmented multimodal interaction for synchronous presentation, collaboration, and education with remote audiences?
“Sage on the Stage” vs. “Guide on the Side” styles of communication (i.e., orator vs. facilitator; or, similarly, the distinction between formal, linear, and scripted concert recitals and informal, unscripted, and collaborative jam sessions).
Overall, what are the dimensions of the design space for augmented multimodal interaction for synchronous presentation, collaboration, and education with remote audiences? Where does existing work fit within this design space, and which parts remain underexplored?
Ultimately, our aim is to identify timely and actionable research items that the community gathered at this Shonan meeting can pursue. We aim to report the emerging research directions we identify in a top-quality research venue such as ACM CHI or IEEE TVCG. In addition, the format of the Shonan meeting will allow us to generate a series of low- and high-fidelity prototypes that we will share with the broader community on the web. Finally, we anticipate gathering a collection of existing examples of augmented video interaction and compiling them into a browsable online gallery.