Heads-Up Computing

To better understand the concept of Heads-Up Computing, let's use a cooking analogy to explore its components:


Imagine you're preparing to cook a meal. The first decision is selecting the right hardware; this could be a wok, a steamer, or a barbecue rack depending on what you're planning to cook. Next, consider the ingredients. If you're a vegetarian, your choices will naturally exclude meat, focusing instead on vegetables and plant-based products. Finally, the cooking method comes into play. Each cuisine, such as French or Chinese Sichuan, has its distinct techniques and methods that define its flavors and outcomes.


So, what are the hardware, ingredients, and strategies in the context of Heads-Up Computing?

1) Hardware: Body-Compatible Hardware Components

Traditional devices like mobile phones often distract users, turning them into so-called "smartphone zombies," because they require concentrated interaction. In contrast, Heads-Up Computing leverages a distributed design that aligns with human capabilities. While numerous hardware design possibilities exist, achieving a balance between compatibility, convenience, practicality, and existing technological constraints is crucial. We anticipate that, at least in the near future (5–10 years), the hardware platform for Heads-Up Computing will consist primarily of two fundamental components: a head-piece and a hand-piece. Further ahead, we also anticipate a body-piece, in the form of a robot, that can extend the capabilities of the heads-up hardware platform.


Head-piece responsibilities: capture what the user sees, hears, and says from the user's own perspective, and deliver visual and audio output near the eyes and ears, so the user does not need to look down at a device.


Hand-piece responsibilities: capture manual input (e.g., through a ring mouse or hand tracking) and provide feedback to the hands, complementing the head-piece with fine-grained control.


While some systems, such as Apple's Vision Pro [1], integrate the head-piece and hand-piece into a single device, this approach compromises wearability, resulting in a device that is too bulky for everyday use. Consequently, a two-piece solution is more likely to achieve the portability needed to serve as an everyday device. For example, systems like Eyeditor [4], GlassMessaging [6], and PANDALens [21] use smart glasses as the head-piece and a wearable ring mouse as the hand-piece to balance functionality and portability. Note that the hand-piece used in these examples is only a basic one and achieves only part of the functionality of an ideal hand-piece, which would provide comprehensive tracking and feedback capabilities.
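To make this division of labor concrete, below is a minimal sketch in Python of how a two-piece platform might be composed. The HeadPiece, HandPiece, and HeadsUpPlatform classes and their methods are illustrative assumptions only, not the API of any of the systems cited above.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class InputEvent:
    """A unified event emitted by either wearable component (illustrative)."""
    source: str      # "head-piece" or "hand-piece"
    modality: str    # e.g. "gaze", "voice", "ring-click"
    payload: dict


class HeadPiece:
    """Smart glasses: sense the user's view and voice, render visual/audio output."""
    def sense(self) -> List[InputEvent]:
        # Placeholder: a real device would stream gaze samples and voice commands.
        return [InputEvent("head-piece", "gaze", {"x": 0.61, "y": 0.35})]

    def render(self, text: str) -> None:
        print(f"[OHMD] {text}")


class HandPiece:
    """Ring mouse (or richer hand tracker): manual input plus haptic feedback."""
    def sense(self) -> List[InputEvent]:
        return [InputEvent("hand-piece", "ring-click", {"button": "select"})]

    def vibrate(self, ms: int) -> None:
        print(f"[haptics] pulse {ms} ms")


class HeadsUpPlatform:
    """Routes events from both pieces to whatever application is in the foreground."""
    def __init__(self, head: HeadPiece, hand: HandPiece):
        self.head, self.hand = head, hand
        self.handlers: List[Callable[[InputEvent], None]] = []

    def on_event(self, handler: Callable[[InputEvent], None]) -> None:
        self.handlers.append(handler)

    def tick(self) -> None:
        for event in self.head.sense() + self.hand.sense():
            for handler in self.handlers:
                handler(event)


platform = HeadsUpPlatform(HeadPiece(), HandPiece())
platform.on_event(lambda e: platform.head.render(f"{e.modality} from {e.source}"))
platform.tick()
```

The point of the sketch is the separation of concerns: each piece stays small enough to wear all day, while the platform layer combines their inputs and outputs into one coherent interaction loop.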


2) Ingredients: Multimodal Voice, Gaze, and Gesture Interaction

For effective interaction during daily activities, Heads-Up Computing utilizes complementary communication channels, since most everyday tasks already occupy the eyes and hands: voice for issuing commands and entering or editing text [4], gaze for selecting and pointing at targets [3], and subtle manual gestures on the hand-piece for discrete, low-effort input [18].
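The sketch below illustrates one way these channels could be fused: gaze identifies the target, voice carries the command, and a subtle ring tap serves as confirmation when speaking is not appropriate. The types, field names, and dwell threshold are hypothetical assumptions for illustration, not values from the cited studies.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class GazeSample:
    target_id: str          # UI element currently fixated
    dwell_ms: float


@dataclass
class VoiceCommand:
    text: str               # e.g. "reply", "dismiss"


@dataclass
class RingGesture:
    kind: str               # e.g. "tap", "swipe-left"


def fuse(gaze: GazeSample, voice: Optional[VoiceCommand],
         gesture: Optional[RingGesture],
         dwell_threshold_ms: float = 300) -> Optional[dict]:
    """Return an action only when gaze plus one other channel agree."""
    if gaze.dwell_ms < dwell_threshold_ms:
        return None                      # not fixating on anything long enough
    if voice is not None:
        return {"target": gaze.target_id, "action": voice.text}
    if gesture is not None and gesture.kind == "tap":
        return {"target": gaze.target_id, "action": "select"}
    return None


print(fuse(GazeSample("notification-3", 420), VoiceCommand("reply"), None))
```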


3) Strategies: Static and Dynamic Interface & Interaction Design Approaches

Designing interface and interaction strategies for Heads-Up Computing presents unique challenges, as it requires minimal interference with the user's current activities. This necessitates the use of transparent displays that adapt as the user moves, and the avoidance of traditional input methods like keyboards, mice, and touch interactions, which demand significant attention and resources.


To create heads-up friendly interfaces and interactions, two main approaches can be considered:


a) Static Interface & Interaction Design: Environmentally Aware and Fragmented Attention Friendly
This approach designs interfaces suited to environments requiring fragmented attention, such as multitasking scenarios. Examples of research in this category include icon-based notifications on optical head-mounted displays (OHMDs) [15], text spacing for on-the-go reading [16], and video learning designed for on-the-go viewing [17].

In addition, tools like VidAdapter [14] are instrumental in adapting existing media to these new interfaces, taking into account both the physical and cognitive availability of the user.
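As a rough illustration of the "fragmented attention friendly" idea, the following sketch breaks content into glanceable chunks and advances only during moments the wearer is available. The chunking rule and the availability signal are simplifying assumptions, not the actual VidAdapter or LSVP pipelines.

```python
from typing import Iterator, List


def chunk_for_glances(text: str, max_words: int = 6) -> List[str]:
    """Split text into short segments that can be read in a single glance."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]


def present(chunks: List[str], availability: Iterator[bool]) -> None:
    """Show the next chunk only when the user signals a moment of availability."""
    pending = list(chunks)
    for free in availability:
        if not pending:
            break
        if free:
            print(f"[OHMD] {pending.pop(0)}")
        else:
            print("[OHMD] (holding: user busy)")


present(chunk_for_glances("Heads-Up Computing keeps content glanceable while you move"),
        iter([True, False, True]))
```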


b) Dynamic Interface & Interaction Design: Resource-Aware
Instead of one-size-fits-all interface solutions, one can also design interfaces that dynamically respond to the user's current cognitive and physical state. This is what we call the "resource-aware interaction" approach: it adjusts the system's behavior and generates interfaces and interactions that are context-sensitive, providing a more personalized and efficient user experience. An example of such an interface has been proposed by Lindlbauer's group [19]. However, such interfaces require the system to understand the environment, the user's cognitive state, and the device constraints in real time, which is considerably harder to achieve. This remains a research direction worth further investigation, and Heads-Up Multitasker [20] is one such attempt to model users' cognition in heads-up computing scenarios.
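The sketch below illustrates the resource-aware idea at its simplest: a hypothetical notification manager that picks modality and verbosity from rough estimates of the user's spare visual and cognitive resources. The state fields and load thresholds are assumptions for illustration, not the optimization model of [19] or the cognitive model of [20].

```python
from dataclasses import dataclass


@dataclass
class UserState:
    visual_load: float      # 0 = eyes free, 1 = fully occupied (e.g. crossing a road)
    cognitive_load: float   # 0 = idle, 1 = deep in another task
    hands_busy: bool


def adapt_notification(message: str, state: UserState) -> dict:
    """Pick modality and verbosity based on the user's spare resources."""
    if state.visual_load > 0.7:
        # Eyes are needed for the real world: fall back to a short audio cue.
        return {"modality": "audio", "content": "New message", "defer_details": True}
    if state.cognitive_load > 0.7:
        # Attention is scarce: show a terse, glanceable summary only.
        return {"modality": "visual", "content": message[:40], "defer_details": True}
    # Plenty of slack: show the full message and allow immediate interaction.
    return {"modality": "visual", "content": message,
            "input": "voice" if state.hands_busy else "ring"}


print(adapt_notification("Dinner at 7pm? The usual place near campus.",
                         UserState(visual_load=0.8, cognitive_load=0.3, hands_busy=True)))
```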

Heads-Up Computing signifies a transformative shift towards a human-centric approach, where technology is designed to augment rather than hinder user engagement with the real world. 


References:
[1] Apple Inc. 2023. Apple Vision Pro. https://www.apple.com/apple-vision-pro/. Accessed: 2024-03-29.

[2] Chas Danner. 2024. People Are Doing Some Interesting Things With the Apple Vision Pro. Intelligencer (5 Feb 2024). https://nymag.com/intelligencer/2024/02/videos-images-of-people-using-apple-vision-pro-in-public.html

[3] Augusto Esteves, Yonghwan Shin, and Ian Oakley. 2020. Comparing selection mechanisms for gaze input techniques in head-mounted displays. International Journal of Human-Computer Studies 139 (2020), 102414.

[4] Debjyoti Ghosh, Pin Sym Foong, Shengdong Zhao, Can Liu, Nuwan Janaka, and Vinitha Erusu. 2020. Eyeditor: Towards on-the-go heads-up text editing using voice and manual input. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.

[5] Tobias Höllerer and Steven K. Feiner. 2004. Mobile augmented reality. In Telegeoinformatics: Location-Based Computing and Services. Taylor and Francis Books Ltd., London, UK.

[6] Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, and Yanch Ong. 2023. GlassMessaging: Towards Ubiquitous Messaging Using OHMDs. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 3 (2023), 1–32.

[7] Feiyu Lu, Shakiba Davari, Lee Lisle, Yuan Li, and Doug A Bowman. 2020. Glanceable ar: Evaluating information access methods for head-worn augmented reality. In 2020 IEEE conference on virtual reality and 3D user interfaces (VR). IEEE, 930–939.

[8] Steve Mann. 2001. Wearable computing: Toward humanistic intelligence. IEEE intelligent systems 16, 3 (2001), 10–15.

[9] Ray-Ban. 2024. Ray-Ban Meta Smart Glasses. https://www.ray-ban.com/usa/ray-ban-meta-smart-glasses. Accessed: 2024-03-29.

[10] Tram Thi Minh Tran, Shane Brown, Oliver Weidlich, Mark Billinghurst, and Callum Parker. 2023. Wearable Augmented Reality: Research Trends and Future Directions from Three Major Venues. IEEE Transactions on Visualization and Computer Graphics (2023).

[11] Xreal Corporation. 2024. Xreal Light. https://www.xreal.com/light/. Accessed: 2024-03-29.

[12] Shengdong Zhao, Felicia Tan, and Katherine Fennedy. 2023. Heads-Up Computing: Moving Beyond the Device-Centered Paradigm. Commun. ACM 66, 9 (2023), 56–63.

[13] Debjyoti Ghosh, Pin Sym Foong, Shengdong Zhao, Di Chen, Morten Fjeld: EDITalk: Towards Designing Eyes-free Interactions for Mobile Word Processing. CHI 2018: 403

[14] Ashwin Ram, Han Xiao, Shengdong Zhao, Chi-Wing Fu: VidAdapter: Adapting Blackboard-Style Videos for Ubiquitous Viewing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7(3): 119:1-119:19 (2023)

[15] Nuwan Nanayakkarawasam Peru Kandage Janaka, Shengdong Zhao, Shardul Sapkota: Can Icons Outperform Text? Understanding the Role of Pictograms in OHMD Notifications. CHI 2023: 575:1-575:23

[16] Chen Zhou, Katherine Fennedy, Felicia Fang-Yi Tan, Shengdong Zhao, Yurui Shao: Not All Spacings are Created Equal: The Effect of Text Spacings in On-the-go Reading Using Optical See-Through Head-Mounted Displays. CHI 2023: 720:1-720:19

[17] Ashwin Ram, Shengdong Zhao: LSVP: Towards Effective On-the-go Video Learning Using Optical Head-Mounted Displays. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 5(1): 30:1-30:27 (2021)

[18] Shardul Sapkota, Ashwin Ram, Shengdong Zhao: Ubiquitous Interactions for Heads-Up Computing: Understanding Users' Preferences for Subtle Interaction Techniques in Everyday Settings. MobileHCI 2021: 36:1-36:15

[19] Yifei Cheng, Yukang Yan, Xin Yi, Yuanchun Shi, David Lindlbauer: SemanticAdapt: Optimization-based Adaptation of Mixed Reality Layouts Leveraging Virtual-Physical Semantic Connections. UIST 2021: 282-297

[20] Yunpeng Bai, Aleksi Ikkala, Antti Oulasvirta, Shengdong Zhao, Lucia J Wang, Pengzhi Yang, Peisen Xu: Heads-Up Multitasker: Simulating Attention Switching On Optical Head-Mounted Displays. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024).

[21] Runze Cai, Nuwan Janaka, Yang Chen, Lucia Wang, Shengdong Zhao, Can Liu: PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (CHI 2024).