7TH INTERNATIONAL WORKSHOP ON
OBSERVING AND UNDERSTANDING HANDS IN ACTION
We welcome you to join our ICCV 2023 workshop!
The Workshop on Observing and Understanding Hands in Action (HANDS) will gather vision researchers working on perceiving hands performing actions, including 2D & 3D hand detection, segmentation, pose/shape estimation, tracking, etc. The seventh edition of this workshop (HANDS@ICCV2023) will emphasize hand pose estimation from the egocentric view and hands performing fine-grained actions and interactions with tools and objects.
The development of RGB-D sensors and the miniaturization of cameras (wearable cameras, smartphones, ubiquitous computing) have opened the door to a whole new range of technologies and applications that require detecting hands and recognizing hand poses in a variety of scenarios, including AR/VR, assistive systems, robot grasping, and health care. However, hand pose estimation from an egocentric camera and/or in the presence of heavy occlusion remains challenging.
Compared to static camera settings, recognizing hands in egocentric images is a more difficult problem due to viewpoint bias, camera distortion (e.g., from fisheye lenses), and motion blur from head movement. Additionally, handling occlusion during hand-object or hand-hand interactions is an important open challenge that continues to attract significant attention for real-world applications. We will also cover related applications, including gesture recognition, hand-object manipulation analysis, hand activity understanding, and interactive interfaces. Relevant topics include:
2D/3D hand pose estimation
Hand shape estimation
Hand-object/hand-hand interaction
Hand detection/segmentation
Gesture recognition/interfaces
3D hand tracking and motion capture
Hand modeling and rendering
Egocentric vision
Hand activity understanding
Robot grasping and object manipulation
Hand image capture and camera systems
Efficient hand annotation methods and devices
Algorithms, theory, and network architectures
Efficient learning methods with limited labels
Generalization and adaptation to unseen users and environments
Applications in AR/VR, robotics, and haptics
Workshop sponsored by: