Scope of the Workshop
Background
Wearable devices are often equipped with several ambient sensors, such as IMUs, which record linear and rotational movement (via accelerometers, gyroscopes, etc.), or EMG sensors, which can capture finer, more localized movements. These new computing devices open opportunities for new types of interaction, allowing for more proactive and contextual assistance of users.
Given their low power consumption, these ambient motion sensors can be a key modality for powering various on-device models (e.g., exercise / activity recognition for health applications) that require an understanding of the device wearer's movement patterns. However, due to the lack of large-scale training resources, research on ambient motion sensor modeling has been limited thus far.
Call for Papers
We invite researchers from industry and academia to study the unique properties of sensor signals across various real-world applications. We are particularly interested in approaches that combine multiple modalities (e.g., vision, language) and that focus on practical real-world applicability (e.g., privacy-aware and power-efficient on-device models).
Towards this goal, the Ambient AI workshop accepts both short (2-page) and long (4-page) paper submissions on topics including but not limited to:
Wearable sensor signals understanding (IMU, EMG, EEG, eye gaze, …)
NLP applications in multimodal wearable sensor signals understanding
Large-scale pre-training of sensor models via multimodal contrastive learning
Multimodal training, modeling, and fusion methods
Privacy-aware machine learning & federated learning for ambient sensor signals
Efficient and scalable training of sensor models at the edge
On-device power-efficient modeling for sensor signals
Egocentric computer vision applications in wearable sensor signals understanding
Question answering systems with ambient sensor signals
Human and wearable device interaction
Privacy and ethical concerns with wearable sensor AI
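To make the contrastive pre-training topic above concrete, the following is a minimal, self-contained sketch of a symmetric InfoNCE objective aligning a batch of paired IMU and text embeddings, in the spirit of CLIP-style training (as popularized for motion sensors by IMU2CLIP). This is an illustrative NumPy toy, not the IMU2CLIP implementation: the embedding matrices, batch size, dimensionality, and temperature value are all assumptions for the example.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere (cosine-similarity space)."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(imu_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired IMU/text embeddings.

    Row i of each matrix is a matching (positive) pair; every other
    pairing in the batch serves as an in-batch negative.
    """
    imu_emb = l2_normalize(imu_emb)
    text_emb = l2_normalize(text_emb)
    logits = imu_emb @ text_emb.T / temperature   # (B, B) similarity matrix
    labels = np.arange(len(logits))               # diagonal entries = positives

    def xent(l):
        # log-softmax cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)      # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # average the IMU->text and text->IMU directions
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy batch: 4 window embeddings from a hypothetical IMU encoder and
# 4 caption embeddings from a text encoder, dimension 8.
rng = np.random.default_rng(0)
imu = rng.standard_normal((4, 8))
text = imu + 0.1 * rng.standard_normal((4, 8))    # loosely aligned pairs
loss = info_nce_loss(imu, text)
print(round(float(loss), 4))
```

In practice the two encoders (e.g., a transformer over IMU windows and a frozen text encoder) are trained jointly, and the loss would be implemented in an autodiff framework; well-aligned pairs drive this loss toward zero, while mismatched pairs keep it high.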
Resources & Related Work
Please refer to the following papers, datasets & resources publicly available:
Moon et al., "IMU2CLIP: Multimodal Contrastive Learning for IMU Motion Sensors from Egocentric Videos and Text", 2022. (Website: dataloader for IMU-based datasets & encoder models)
Grauman et al., "Ego4D: Around the World in 3,000 Hours of Egocentric Video", CVPR 2022. (Website)
Karakas et al., "Aria Data Tools", 2022. (Website)
Damen et al., "Rescaling Egocentric Vision: Collection, Pipeline and Challenges for EPIC-KITCHENS-100", International Journal of Computer Vision 2022. (Website)