Keynote speakers

Maks Ovsjanikov


Title: Efficient, general-purpose feature learning for 3D shape comparison

Abstract: In this talk I will give an overview of several recent advances in learning features for rigid and non-rigid 3D shapes. My main goals will be two-fold: first, to show how robust and accurate pointwise features on deformable shapes can be learned in an unsupervised manner; second, to discuss how generalizable feature pre-training can be done on complex scenes and then used in downstream tasks involving completely unseen classes, including highly non-rigid shape analysis. Ultimately, my aim will be to show how general-purpose, class-agnostic geometric features can be learned and then used on a wide range of tasks.

Bio: Maks Ovsjanikov is a Professor at Ecole Polytechnique in France. He works on 3D shape analysis with an emphasis on deep learning techniques for shape matching and correspondence. He has received a Eurographics Young Researcher Award, an ERC Starting Grant, a CNRS Bronze Medal (a recognition for junior researchers in France), and an ERC Consolidator Grant in 2023. His works have received 11 best paper awards or nominations at top conferences, including CVPR, ICCV, and 3DV. His main research topics include 3D shape comparison and deep learning on 3D data.


Stefanie Wuhrer


Title: Learning representations of 4D human motion

Abstract: This talk presents recent results on data-driven representations and analyses of full human body motion. We consider 4D human body motion, where a motion sequence is digitized as a discrete number of frames, each captured densely in 3D space. The first part of the presentation focuses on human motion retargeting, where the goal is to retarget a given input motion to a novel character. We consider two solutions to this problem. The first one deforms each frame of the input motion to obtain the body shape of the new character. We demonstrate that this deformation transfer method generalizes well to unseen poses. The second solution retargets the motion to the new character while taking temporal context into account. This method is correspondence-free and allows for online retargeting. The second part of the presentation focuses on learned representations of 4D human motion. We first consider cyclic hip motion that can be temporally aligned and demonstrate that such motion can be represented effectively in a structured latent space that allows for meaningful interpolations between motion sequences. Our model further learns the correlation between body shape and motion. We then consider more general motion of varying action and duration, and demonstrate that human motion can be represented as a sequence of latent primitives. This representation allows for flexible human motion modeling and has applications in spatio-temporal completion tasks from sparse point clouds.

Bio: Stefanie Wuhrer received a Ph.D. in Computer Science in 2009 from Carleton University, Canada. She worked as a research associate at the National Research Council of Canada and as a junior research group leader at Saarland University and MPI Informatik, Germany. Since 2015, she has been a research scientist at Inria Grenoble. Her research interests include 3D geometry and motion processing, shape analysis, and digital humans.


Angela Dai


Title: Learning from Synthetic 3D Priors for Real-World 3D Perception

Abstract: Understanding the 3D structure of real-world environments is a fundamental challenge in machine perception, with many applications in robotic navigation and interaction, content creation, and mixed reality scenarios. We leverage structural and object priors from large-scale synthetic shape and scene datasets to form a basis for understanding object structures from commodity RGB and RGB-D sensors. Synthetic 3D shapes can serve as an effective 3D prior and basis for object reconstruction from single RGB images, even without exact database matches to input observations. However, the required 3D supervision is expensive to obtain and imperfect; we conclude by discussing possibilities for learning from weaker supervision signals, along with future challenges in object-based reconstruction and tracking.

Bio: Angela Dai is an Assistant Professor at the Technical University of Munich, where she leads the 3D AI group. Prof. Dai's research focuses on understanding how the 3D world around us can be modeled and semantically understood. Previously, she received her PhD in computer science from Stanford in 2018 and her BSE in computer science from Princeton in 2013. Her research has been recognized through a Eurographics Young Researcher Award, a Google Research Scholar Award, a ZDB Junior Research Group Award, an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention, as well as a Stanford Graduate Fellowship.