Biography
Elisabetta Farella is a distinguished researcher leading the Energy Efficient Embedded Digital Architectures (E3DA) unit at the Fondazione Bruno Kessler, Trento, Italy.
Her work focuses on developing energy-independent embedded systems integrated with AI.
Elisabetta Farella actively contributes to numerous national and international projects, collaborating with both industry and academia to advance smart, energy-efficient technologies.
Title: Hardware-Aware Scaling in tinyML: Enabling Optimized Neural Networks for Smart Eyewear and Beyond
Abstract: The rapid evolution of tinyML is unlocking new possibilities in edge computing, particularly in computer vision. This keynote explores a hardware-aware approach to scaling and optimizing neural networks, enabling efficient performance across a variety of computer vision tasks. The proposed method addresses the challenges of resource constraints in embedded devices, such as smart eyewear, where power and computational efficiency are critical. Through case studies and practical examples, we will demonstrate how this approach facilitates scalable solutions, paving the way for a new generation of wearable and embedded smart systems.
Biography
Jakob Engel is a Director of Research at Meta Reality Labs, where he leads egocentric machine perception research as part of Meta's Project Aria.
He has more than 10 years of experience working on SLAM, 3D scene understanding, and user/environment interaction tracking, leading research projects as well as shipping core localization technology in Meta's MR and VR product lines.
Dr. Engel received his Ph.D. in Computer Science at the Computer Vision Group at the Technical University of Munich in 2016, where he pioneered direct methods for SLAM through DSO and LSD-SLAM.
Title: Spatial AI for Contextual AI
Abstract: The advent of smart wearable devices enables a new source of context for AI that is embedded in egocentric sensor data. This talk will focus on how Spatial AI - 3D understanding of the environment around you, as well as your interaction with it - will enable a new generation of personalized and contextually grounded AI agents. I will talk about Project Aria and several datasets we have built, as well as exciting new research results from the past year.
Biography
Giovanni Maria Farinella is a Full Professor at the Department of Mathematics and Computer Science at the University of Catania, Italy.
His research interests lie in the fields of Computer Vision and Machine Learning with focus on Egocentric Vision.
He is part of the EPIC-KITCHENS and EGO4D teams. He is an Associate Editor of the international journals IEEE Transactions on Pattern Analysis and Machine Intelligence, Pattern Recognition, and the International Journal of Computer Vision. He has served as Area Chair for CVPR/ICCV/ECCV/BMVC/WACV/ICPR and as Program Chair of ECCV.
He founded and currently directs the International Computer Vision Summer School (ICVSS). He was awarded the PAMI Mark Everingham Prize in 2017 and Intel's 2022 Outstanding Researcher Award.
Title: Procedure Understanding from Egocentric Videos
Abstract: Procedure understanding is crucial to building intelligent agents able to assist users effectively. In this talk, I will present recent results from our lab on procedure understanding from egocentric videos, with a focus on detecting human-object interactions by exploiting synthetic data, representing egocentric videos for long-form understanding, and detecting mistakes online.
Biography
Yoichi Sato is a professor at the Institute of Industrial Science, The University of Tokyo, where he specializes in computer vision.
His research covers a wide array of topics, including first-person vision, gaze sensing and analysis, physics-based vision, and illumination and reflectance modeling.
His work drives the development of innovative solutions and methodologies, making significant contributions to both academic and practical applications of computer vision. Professor Sato's leadership and expertise continue to inspire and shape the future of technology and its intersection with human interaction.
Title: Understanding Egocentric Visual Attention and Actions
Abstract: For a comprehensive understanding of human behavior, it is essential to know both their actions and the focus of their attention during various activities. In this talk, I will present our efforts in studying human visual attention and actions from first-person videos, shedding light on how individuals interact with their environment from an egocentric perspective.
Biography
Enkelejda Kasneci is a Distinguished Professor (Liesel-Beckmann Distinguished Professorship) at the Technical University of Munich and Director of the TUM Center for Educational Technologies.
Renowned for her expertise in eye-tracking research, she focuses on refining algorithms to improve the accuracy and reliability of eye-tracking systems, particularly in real-world contexts.
Her work enhances human-computer interaction through advancements in gaze and pupil detection technologies.
As a respected academic and mentor, Professor Kasneci continues to influence the next generation of researchers, shaping the future of eye-tracking technology and its applications in improving human-computer interaction.
Title: Unseen Cues: Imperceptible Gaze Guidance in Extended Reality
Abstract: Traditional gaze guidance techniques predominantly rely on modifying visual stimuli or adding overt cues to direct attention, which, although effective, can be intrusive or disrupt the user’s experience. In this talk, we present a novel, imperceptible gaze guidance approach in virtual reality (VR) that leverages findings from visual neuroscience to subtly influence the saliency map generated by the primary visual cortex. By triggering natural saccadic reflexes, our method guides the user’s gaze at a neural level, offering a seamless and unobtrusive alternative to existing techniques.