Professor of Visual Computing & Artificial Intelligence at the Technical University of Munich
Title of Talk: 3D Meshes is All You Need
Abstract: Recent breakthroughs in neural rendering, such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), have dramatically advanced our ability to reconstruct and synthesize 3D content from images. However, these approaches often trade efficiency and direct usability for photorealism, limiting their integration into interactive, real-time applications. In this talk, I will show that 3D meshes, enhanced with the latest generative AI techniques, are reclaiming their central role in the 3D content creation pipeline. For instance, I will present generative methods such as MeshGPT and SceneTex, which leverage modern generative architectures to produce high-quality, textured 3D meshes directly from data. These methods combine the compactness and editability of traditional mesh representations with the generative power of modern machine learning, enabling scalable and controllable 3D asset creation. I will further explore how these mesh-centric approaches align closely with the demands of the metaverse, where interoperability, real-time rendering, and content diversity are key. By making meshes generative, editable, and realistic, we take a significant step toward democratizing 3D content creation for virtual worlds, gaming, and immersive social experiences.
Professor at the University of Luxembourg, Luxembourg
Title of Talk: Controllable 3D Content Creation for the Metaverse: From Scans to Structured Design
Abstract: The metaverse envisions immersive digital environments that depend on the large-scale creation of high-quality 3D content. AI plays a central role in scaling this process, yet significant challenges remain in achieving controllability and editability of the generated assets: while AI enables efficient digitization, scanned models remain hard to control. This keynote presents a pathway from scans to design by converting 3D data into parametric CAD models, recovering not only the geometry but also the construction history, i.e., the sequential steps a human expert might take within a CAD environment. By bringing together AI, geometric learning, and CAD, the talk outlines a new frontier in 3D content creation, where scanned reality becomes a controllable design space, powering the next generation of metaverse experiences.
Professor at TU Darmstadt, leading the Chair of 3D Graphics & Vision
Title of Talk: Learning Digital Humans in Motion
Abstract: The main theme of my work is to capture and (re-)synthesize the real world using commodity hardware. This includes modeling and tracking the human body, as well as reconstructing and interacting with the environment. Such digitization is needed for a variety of applications in AR/VR as well as in movie (post-)production. Teleconferencing and remote collaborative work in VR are of particular interest, as they represent the next evolutionary step in how people communicate, and a realistic reproduction of appearance and motion is key to such applications. In this presentation, I will talk about how we leverage foundation models for the reconstruction of humans and how we synthesize new motion from text or audio input.