UbiComp/ISWC 2024 Workshop on Heads-Up Computing
Opportunities and Challenges of the Next Interaction Paradigm with Wearable Intelligent Assistants
Join us at UbiComp 2024 for an immersive workshop on Heads-Up Computing, where we'll navigate the future of wearable and pervasive technology — all innovators and thought leaders are welcome!
As wearable intelligent devices such as Apple’s Vision Pro [1], Ray-Ban Meta smart glasses [9], and XReal’s Air smart glasses [11] become available, consumers are inevitably beginning to incorporate them into their daily lives: users have been widely documented (and often derided) wearing them while walking on the street, riding public transit, and in a wide range of real-world social situations [2]. However, despite the willingness of individuals to experiment with wearing their headsets in diverse contexts, we still know little about designing wearable assistants that can be safely, securely, efficiently, and seamlessly integrated into people’s daily activities and routines. Indeed, it seems clear that the successful integration of intelligent assistants into everyday activities will, at a minimum, require maintaining a delicate balance between supporting environmental awareness and enabling timely consumption of digital information.
One emerging framework that seeks to address these concerns is “Heads-Up Computing” [12]. At its core, this vision seeks to provide seamless, timely, and intelligent digital support, ensuring that technology accentuates human capabilities rather than subduing them. It represents a focused yet flexible interaction paradigm, anchored in three pillars:
Body-compatible hardware components: devices that conform to and respect human anatomy and ergonomics.
Multimodal voice, gaze, and gesture interaction: leveraging natural human communication modalities.
Resource-aware interaction model: recognizing and adapting to the user’s current cognitive and physical resource allocation (see the illustrative sketch after this list).
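To make the third pillar concrete, the short Python sketch below illustrates one way a resource-aware interaction model might choose an output modality based on which of the user's perceptual and motor channels are currently occupied. This is a minimal sketch under assumed names: the channel taxonomy and the choose_modality function are hypothetical illustrations, not part of the Heads-Up Computing framework itself [12].

from enum import Enum, auto

class Channel(Enum):
    # Hypothetical resource channels a wearable assistant might track.
    VISUAL = auto()    # eyes occupied (e.g., walking, reading)
    AUDITORY = auto()  # ears occupied (e.g., conversation, traffic)
    MANUAL = auto()    # hands occupied (e.g., carrying, cooking)

def choose_modality(busy: set) -> str:
    """Pick the least-contended channel for delivering a message.

    A deployed system would also weigh urgency, social context, and
    user preference; this sketch encodes only the core idea of
    adapting output to the user's current resource allocation.
    """
    if Channel.VISUAL not in busy:
        return "glanceable text on the head-mounted display"
    if Channel.AUDITORY not in busy:
        return "short audio summary"
    return "defer until a channel frees up"

# Example: the user is walking, so vision is occupied but hearing is free.
print(choose_modality({Channel.VISUAL}))  # -> short audio summary

In a real system the set of busy channels would be estimated from sensing (e.g., gaze, motion, and audio activity), and high-urgency messages could override deferral; the sketch captures only the adaptation logic.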
Although researchers have started investigating this area [3–8,10], much remains to be uncovered before we fully understand the design space. How can wearable intelligent assistants be designed to provide meaningful support without overwhelming or distracting users? How can they adapt to individual contexts and preferences while maintaining privacy and security? These questions highlight the need for a comprehensive exploration of the opportunities and challenges presented by the next interaction paradigm with wearable intelligent assistants.
This workshop aims to bring together experts and stakeholders in the field to collaboratively examine the unique design considerations and implications of wearable intelligent assistants. By analyzing real-world use cases and exploring cutting-edge research, we can gain valuable insights into the potential of these devices to enhance and transform our daily lives. Additionally, we will address the ethical and societal implications surrounding the widespread adoption of wearable intelligent assistants, ensuring that these technologies are developed in a responsible and sustainable manner.
By fostering an inclusive and interdisciplinary dialogue, this workshop seeks to bridge the gap between hardware development and holistic understanding. Through this collective effort, we can shape the future of Heads-Up Computing, not only by addressing the technical challenges but also by considering the human-centered aspects that are critical to the successful integration of wearable intelligent assistants into our lives.
Workshop Organizers
Shengdong Zhao
Professor
City University of Hong Kong
Ian Oakley
Professor
Korea Advanced Institute of Science & Technology
Yun Huang
Associate Professor
The University of Illinois Urbana-Champaign
Haiming Liu
Associate Professor
University of Southampton
Can Liu
Associate Professor
City University of Hong Kong
Workshop Agenda
Keynote speaker:
Mark Billinghurst
Mark Billinghurst is Professor of Human Computer Interaction at the University of South Australia in Adelaide, Australia. He earned a PhD from the University of Washington in 2002 and researches innovative computer interfaces that explore how virtual and real worlds can be merged, having published over 300 papers on topics such as wearable computing, Augmented Reality, and mobile interfaces. Prior to joining the University of South Australia he was Director of the HIT Lab NZ at the University of Canterbury, and he has previously worked at British Telecom, Nokia, Google, and the MIT Media Laboratory. His MagicBook project won the 2001 Discover Award for best entertainment application, and he received the 2013 IEEE VR Technical Achievement Award for contributions to research and commercialization in Augmented Reality. In 2013 he was selected as a Fellow of the Royal Society of New Zealand.
Date: Oct. 5th, 2024; Venue: Sofitel Melbourne
9:00 AM – 9:10 AM: Opening and Welcome - (Host: Ian Oakley)
9:10 AM – 9:40 AM: Setting the Stage (Host: Shengdong Zhao)
Organizer presentation on the "Heads-Up Computing" paradigm
9:40 AM – 10:30 AM: (Keynote) Mark Billinghurst - Research directions in Heads-up Computing
(Chaired by Shengdong Zhao)
10:30 AM – 11:00 AM: Coffee Break
11:00 AM – 12:30 PM: Participant Research Showcase (Chaired by Haiming Liu)
(Order of presentation may be adjusted)
Kent Lyons: The Road to Ubiquity: Unpacking Barriers to Mass Adoption of Heads-Up Computing
Nuwan Janaka: Empowering Every Step: Towards Proactive Intelligent Wearable Assistants
Jiwan Kim: The Impact of Gaze and Hand Gesture Complexity on Gaze-Pinch Interaction Performances
Haiming Liu: DeepVision
Gerinna Wang: A New Way to Create in the Era of Generative AI
Dell Zhang: Empowering Smart Glasses with Large Language Models: Towards Ubiquitous AGI
Tang Yiliu (remote): Proposing New Metric for XR
12:30 PM – 2:00 PM: Lunch Break
2:00 PM – 2:30 PM: Delineating Key Themes (Organizer introduction of topics)
Discussion of major themes (shared notes: https://docs.google.com/document/d/1-LthMnk-qduybT7IIckUOYsSThuP7lIqz_yYADnCn-s/edit)
2:30 PM – 3:30 PM: Group Deliberations (Host: Haiming Liu)
Group discussions on insights and theme intersections
3:30 PM – 4:00 PM: Break
4:00 PM – 4:45 PM: Reconvene and Wrap-up (Host: Ian Oakley)
Final presentations from groups
Open discussion on workshop outcomes
4:45 PM – 5:00 PM: The Way Forward (Host: Shengdong Zhao)
Discussion on future collaborations and research directions
Call for Papers
Background & Introduction:
Heads-Up Computing [12] is an emerging interaction paradigm within Human-Computer Interaction (HCI) that focuses on seamlessly integrating computing systems into the user's natural environment and daily activities. The goal is to deliver information and computing capabilities in an unobtrusive manner that complements ongoing tasks without diverting users' attention from the real-world context. More information on Heads-Up Computing can be found in [12].
Although there has been some initial progress in this domain, much more exploration is needed to fully realize its potential. We view this workshop as a valuable opportunity to outline a research roadmap that will direct our future endeavors in this exciting field. This roadmap will help us identify key areas of focus, address current challenges, and explore innovative solutions that enhance user interaction seamlessly.
Topics of Interest:
We seek participants with research backgrounds in AR, MR, wearable computing, and/or intelligent assistants. Interested academic participants are asked to submit a 2–4 page position paper or research summary on topics including but not limited to:
Interfaces and Interactions: As smart glasses usher us into a new age, they bring forth the question of designing interactions that are intuitive, seamless, and socially acceptable. How can we meld technology with human instincts?
Mobility/Multitasking: The mobility that smart glasses bring is undeniable. The design nuances of catering to a user on the move—be it walking, driving, or merely existing in public spaces—deserve detailed discussion.
Ergonomics and Comfort: Functionality does not necessarily guarantee comfort. Balancing capability with user comfort will be a pivotal area of exploration.
Inclusive and Trustworthy Information Access: Information empowers people’s lives, yet with a constant influx of it, users run the risk of being overwhelmed. This theme will dissect the impact of information accessibility and how to manage and interact with information without jeopardizing safety.
Privacy and Ethics: In an age where user data holds high value to various stakeholders, wearable technologies walk a fine line between being informative and invasive. The ethical implications of data collection, storage, and usage will be a prime area of focus.
Abuse and Addiction: Every technological marvel comes with its own set of pitfalls. The potential misuse, both by vendors and individuals, will be scrutinized. Delving into these dark patterns will help us forecast and possibly prevent misuse.
Submission guideline:
To submit your workshop paper for Ubicomp24p, please ensure your documents are formatted as PDF files. You can upload your submissions through the Ubicomp24p Submission Portal.
For Academic Participants, you can submit:
Position Paper: Focus on a specific issue within the realm of Heads-Up Computing.
Research Summary: Provide a comprehensive overview of multiple projects you are involved in.
Once accepted, all position papers and research summaries will be compiled into the workshop proceedings and made accessible on arXiv.
For Industry Participants:
If you do not have previous publications in this area but wish to attend the workshop, please submit a 1-page cover letter. In your letter, describe your background and outline what you hope to learn and contribute during the workshop.
In addition to this standard format, we ask everyone to complete a short online form (https://forms.gle/xC2v9x23vXGHX3K78) covering:
A brief introduction to your research area.
Your past and ongoing research topics.
What you hope to get out of the workshop.
Perceived major issues with the next interaction paradigm of wearable intelligent assistants.
Insights or solutions you might have in mind.
It is imperative that at least one author of each accepted submission attend the workshop. Furthermore, all participants must register both for the workshop and for a minimum of one day of the main conference. We eagerly await your valuable contributions and insights. Together, let’s shape the future of human-computer interaction.
Important dates:
July 5, 2024: Submission deadline for workshop papers. (Note: although the conference website may list June 7th as the paper submission deadline, the correct deadline for the Heads-Up Computing workshop is July 5th. Please make sure to submit your papers by this revised date.)
July 19, 2024: Notification of acceptance for workshop papers
July 26, 2024: Deadline for camera-ready versions of workshop papers for inclusion in the ACM DL
August 16, 2024: Deadline for camera-ready versions of workshop papers for inclusion in arXiv
October 5–6, 2024: Workshops in Melbourne, Australia
Organizers:
Shengdong Zhao: Professor, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong, China
Ian Oakley: Professor, KAIST, Daejeon, South Korea
Yun Huang: Associate Professor, University of Illinois Urbana-Champaign, Urbana, Illinois, USA
Haiming Liu: Associate Professor, University of Southampton, Southampton, UK
Can Liu: Associate Professor, City University of Hong Kong, 18 Tat Chee Avenue, Kowloon, Hong Kong, China
If you have any questions, please contact us at ubicomp24p@precisionconference.com.
References:
[1] Apple Inc. 2023. Apple Vision Pro. https://www.apple.com/apple-vision-pro/. Accessed: 2024-03-29.
[2] Chas Danner. 2024. People Are Doing Some Interesting Things With the Apple Vision Pro. Intelligencer (5 Feb 2024). https://nymag.com/intelligencer/2024/02/videos-images-of-people-using-apple-vision-pro-in-public.html
[3] Augusto Esteves, Yonghwan Shin, and Ian Oakley. 2020. Comparing selection mechanisms for gaze input techniques in head-mounted displays. International Journal of Human-Computer Studies 139 (2020), 102414.
[4] Debjyoti Ghosh, Pin Sym Foong, Shengdong Zhao, Can Liu, Nuwan Janaka, and Vinitha Erusu. 2020. Eyeditor: Towards on-the-go heads-up text editing using voice and manual input. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[5] Tobias Höllerer and Steven K. Feiner. 2004. Mobile augmented reality. In Telegeoinformatics: Location-Based Computing and Services. Taylor and Francis Books Ltd., London, UK.
[6] Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, and Yanch Ong. 2023. GlassMessaging: Towards Ubiquitous Messaging Using OHMDs. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 3 (2023), 1–32.
[7] Feiyu Lu, Shakiba Davari, Lee Lisle, Yuan Li, and Doug A Bowman. 2020. Glanceable ar: Evaluating information access methods for head-worn augmented reality. In 2020 IEEE conference on virtual reality and 3D user interfaces (VR). IEEE, 930–939.
[8] Steve Mann. 2001. Wearable computing: Toward humanistic intelligence. IEEE intelligent systems 16, 3 (2001), 10–15.
[9] Ray-Ban. 2024. Ray-Ban Meta Smart Glasses. https://www.ray-ban.com/usa/ray-ban-meta-smart-glasses. Accessed: 2024-03-29.
[10] Tram Thi Minh Tran, Shane Brown, Oliver Weidlich, Mark Billinghurst, and Callum Parker. 2023. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics (2023).
[11] Xreal Corporation. 2024. Xreal Light. https://www.xreal.com/light/. Accessed: 2024-03-29.
[12] Shengdong Zhao, Felicia Tan, and Katherine Fennedy. 2023. Heads-Up Computing: Moving Beyond the Device-Centered Paradigm. Commun. ACM 66, 9 (2023), 56–63.
[13] Debjyoti Ghosh, Pin Sym Foong, Shengdong Zhao, Di Chen, and Morten Fjeld. 2018. EDITalk: Towards Designing Eyes-free Interactions for Mobile Word Processing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI 2018). Article 403.
[14] Ashwin Ram, Han Xiao, Shengdong Zhao, and Chi-Wing Fu. 2023. VidAdapter: Adapting Blackboard-Style Videos for Ubiquitous Viewing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 3 (2023), Article 119.
[15] Nuwan Nanayakkarawasam Peru Kandage Janaka, Shengdong Zhao, and Shardul Sapkota. 2023. Can Icons Outperform Text? Understanding the Role of Pictograms in OHMD Notifications. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023). Article 575.
[16] Chen Zhou, Katherine Fennedy, Felicia Fang-Yi Tan, Shengdong Zhao, and Yurui Shao. 2023. Not All Spacings are Created Equal: The Effect of Text Spacings in On-the-go Reading Using Optical See-Through Head-Mounted Displays. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023). Article 720.
[17] Ashwin Ram and Shengdong Zhao. 2021. LSVP: Towards Effective On-the-go Video Learning Using Optical Head-Mounted Displays. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, 1 (2021), Article 30.
[18] Shardul Sapkota, Ashwin Ram, and Shengdong Zhao. 2021. Ubiquitous Interactions for Heads-Up Computing: Understanding Users' Preferences for Subtle Interaction Techniques in Everyday Settings. In Proceedings of MobileHCI 2021. Article 36.
[19] Yifei Cheng, Yukang Yan, Xin Yi, Yuanchun Shi, and David Lindlbauer. 2021. SemanticAdapt: Optimization-based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections. In Proceedings of UIST 2021. 282–297.
[20] Yunpeng Bai, Aleksi Ikkala, Antti Oulasvirta, Shengdong Zhao, Lucia J. Wang, Pengzhi Yang, and Peisen Xu. 2024. Heads-Up Multitasker: Simulating Attention Switching On Optical Head-Mounted Displays. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024).
[21] Runze Cai, Nuwan Janaka, Yang Chen, Lucia Wang, Shengdong Zhao, and Can Liu. 2024. PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024).