Research seminars
Master in Robotics, Graphics and Computer Vision - Universidad de Zaragoza
Camera Calibration in Sports
Floriane Magera, Innovation Engineer at EVS Broadcast Equipment. Researcher at Univ. of Liège (Belgium).
December 18th @15h - A07
Abstract: Camera calibration is central to many sports technologies we now take for granted, including player tracking, match statistics, and officiating. In this talk, I’ll share insights from my industrial PhD—where academic research meets the practical challenges of real-world sports production.
Bio: Hi, I'm Floriane Magera, from the French-speaking part of Belgium. I'm currently pursuing a PhD in Computer Vision at the University of Liège, in collaboration with EVS Broadcast Equipment, a leader in live production. My research focuses on enabling augmented reality for sports content, a task that relies heavily on precise camera calibration. So far, my work has been integrated into a VAR (Video Assistant Referee) system for football.
Perceptually Inspired Learning Models for Intuitive Authoring of Material Appearance (PhD Defense)
Julia Guerrero-Viu, Graphics and Imaging Lab, Universidad de Zaragoza, Spain.
February 2nd @15h - Sala de Conferencias I3A - Edificio i+D+I
Bio: I am a PhD Candidate working under the supervision of Prof. Belen Masia and Prof. Diego Gutierrez at the Graphics and Imaging Lab, Universidad de Zaragoza (Spain). Previously, I completed a Bachelor's in Computer Engineering at Universidad de Zaragoza (Spain) and a Master's in Computer Science, with a specialization in Artificial Intelligence, at the University of Freiburg (Germany). My research interests lie at the intersection of computer graphics, perception science, computer vision, and deep learning. My PhD thesis focuses on better understanding how our brains visually perceive material appearance in order to build intuitive, perceptually-based representations of appearance for visual content creation. In my free time, I love volunteering in scientific outreach activities and raising the visibility of women in STEM. Website here.
Abstract: Our visual perception of the world is strongly influenced by how we interpret material appearance. While humans have an innate ability to recognize materials and their properties, such as glossiness or softness, these characteristics emerge from intricate and not fully understood interactions between different factors, including not only surface reflectance, but also external conditions such as illumination, geometry, or point of view. Therefore, understanding and modeling material appearance from a perceptual perspective remains a significant scientific challenge. In computer graphics, multiple material representations exist, which are often guided by the physical interactions between light and matter. However, and despite recent advances, there is still a fundamental gap between such computational models of material appearance and perceptually meaningful, human-friendly properties, limiting our ability to interact with digital imaging tools. This thesis explores how human perception can inform the learning of latent representations that are more aligned with how we see and understand material appearance, as well as how these representations can be used for developing more intuitive and controllable material authoring tools to support user-centered creative tasks. We present the contributions of this thesis from both perspectives: finding perceptually-meaningful material representations, and developing intuitive material authoring tools.
Perception-Based Techniques to Enhance User Experience in Virtual Reality
Colin Groth. Immersive Computing Lab @ NYU, New York, USA.
March 4th @15h - Online - Streamed at A.07
Bio: Colin Groth is a Postdoctoral Researcher at New York University in the Immersive Computing Lab led by Prof. Qi Sun. He received his MSc and Ph.D. in Computer Science from Technische Universität Braunschweig, Germany, in 2020 and 2024 respectively. Previously, he worked as a Postdoc in the Computer Graphics group of Prof. Hans-Peter Seidel at the Max Planck Institute for Informatics. His research interests include perception-driven techniques in VR, particularly for mitigating cybersickness. Through his methods and research, he also aims to encourage cross-disciplinary collaboration and develop applications for real-time rendering, image processing, and augmented reality.
Abstract: Virtual reality promises deeply immersive experiences for entertainment, learning, healthcare, and more. Yet many applications still suffer from limited visual quality and unwanted side effects such as cybersickness, which reduce user acceptance. In this lecture, I present research that improves the VR experience by combining technical innovation with insights from human perception. The work explores how bodily signals can be aligned with visual input, how immersive video can be encoded more efficiently while respecting perceptual constraints, and how subtle visual manipulations can be applied to reduce discomfort without altering the underlying scene. Across these approaches, the central idea is consistent: rather than increasing raw computational power, we can improve immersion and reduce discomfort by understanding and leveraging how humans see and perceive motion.
Publish or Perish, Part 1: Why, When, Where, How much?
Juan D. Tardós. Dept. Informática e Ingeniería de Sistemas, Universidad de Zaragoza
March 13th @12h - A07
Abstract: For a researcher, publishing one's results is one of the most important activities. The goal of these seminars is to gain a deeper understanding of the academic publishing process. This first talk will address the following topics:
1. Why publish?
2. When to publish?
3. Where to publish? Journal and conference rankings. Impact Factor.
4. How are researchers evaluated? Quantity, Quality, Impact.
5. Useful tools: ISI Web of Science, Google Scholar, PoP, SCImago, ...
This presentation will include practical examples of how to use the available tools to solve common questions such as how to find the most relevant journals and conferences, influential papers and "hot topics" in a research area, or how to find quality indicators of our publications and how to report them in a CV or an accreditation application.
Bio: Juan D. Tardós is a professor of Systems Engineering and Automatic Control at the University of Zaragoza. His research area is perception and environment understanding in robotics. He is co-author of one book and more than 60 journal and conference papers on these topics. He has served as a reviewer for several conferences and journals, reviewing more than 200 papers. He has handled more than 80 papers as Associate Editor of the IEEE Transactions on Robotics, IROS, and RSS, obtaining reviews and writing recommendations for their publication or rejection. This presentation reflects his own experience and opinions.
How to give a good (research) talk
Diego Gutierrez - Full Professor at Dept. Informática e Ingeniería de Sistemas, Universidad de Zaragoza
April 10th @12h - A07
Bio: I'm a Full Professor in the Computer Science Department at Universidad de Zaragoza, where I lead the Graphics & Imaging Lab of the I3A Institute. I'm also a member of the Vision, Image and Neurodevelopment Group of the IIS Aragon Institute, and the recipient of the 2022 Eurographics Outstanding Technical Contributions Award. My research focuses on the areas of rendering (simulation of light transport), computational imaging, perception, and virtual reality. I have been a visiting researcher at MIT, Stanford, Yale, and UCSD, among others, and was recently selected as one of the 100 most influential researchers of the decade in my field.
Neural Nanophotonics for Physical AI
Ethan Tseng - CTO of Cephia; prev. PhD student at Princeton University
April 13th @17h - Online (https://meet.google.com/ykf-nwxd-zut)
Abstract: Although optical design is a mature field, the introduction of novel optical devices such as metasurfaces requires a concurrent introduction of new design methods. Coincident with the invention of these new light-shaping tools is the rise of artificial intelligence, specifically deep learning with neural networks. In this talk, I will present my research on differentiable wave propagation and its application to cameras and displays. Specifically, the optical components are treated as differentiable layers, akin to neural network layers, that can be trained jointly with the computational blocks of the imaging/display system. I will show how this framework can be used to design salt-grain-sized metasurface optics, commercial camera optics, and étendue-expanding optics for holographic displays.
Bio: Ethan Tseng is a founder and CTO of Cephia, a company that aims to redefine computer vision and computational imaging. He received his PhD from Princeton University where he was advised by Prof. Felix Heide. Ethan’s research was highlighted by Optics & Photonics News in 2021 and in 2024 and has been featured in international media such as BBC, NSF Discovery Files, Newsweek, Nvidia Technical Blog, and Jimmy Fallon’s Tonight Show. Ethan is a recipient of the Google PhD Fellowship. Webpage: https://ethan-tseng.github.io