3rd Digital Futures Young Scientist Conference
We are excited to announce that Jasper van Kuijk (Karlstad University and Delft University of Technology), Johannes Heiny (KTH) and Noémie Jaquier (KTH) are the keynote speakers of the conference. In addition, Urban Forssell (KTH and DF) will host a panel session with Frank Jiang (KTH and FleetMQ), Joanna Hård (Amun AI) and Ian Hoffecker (KTH and SciLifeLab).
8:15 - 9:00 Registration
9:00 - 9:15 Opening
9:15 - 10:15 Keynote I 'Digitalization: solution or problem?' by Jasper van Kuijk
Jasper van Kuijk is Assistant Professor of design for responsible digital transformation at Karlstad University, Sweden. He studies how digital information systems can be designed to be useful for individuals, beneficial for society, and effective for the organizations that operate them. He also engages in public outreach on design research through columns, books, and a podcast.
Title: Digitalization: solution or problem?
Abstract: Round-the-clock access to information and services - yet exclusion of the less digitally savvy and erosion of human contact. Is the increasing digitalization of society delivering more advantages than disadvantages? And as we develop ever more digital solutions, are we solving problems, or creating new ones?
10:15 - 11:45 Poster sessions I and II together with fika
11:45 - 13:15 Lunch
13:15 - 14:15 Keynote II 'The Curse and Blessing of Dimensionality: A Random Matrix Perspective' by Johannes Heiny
Johannes Heiny was recently appointed as Associate Professor with specialization in mathematical statistics at the Department of Mathematics, KTH Royal Institute of Technology. Before joining KTH, he worked as Associate Professor at Stockholm University, and he held postdoc positions at Ruhr University Bochum and Aarhus University. Johannes Heiny obtained a PhD degree in probability and statistics from the University of Copenhagen in 2017. His research interests lie at the intersection of machine learning, random matrix theory and high-dimensional statistics. Utilizing various probabilistic and statistical tools, for example from extreme value theory, his mission is to develop robust and universal mathematical methods to advance the foundations of AI and the explainability of modern machine learning algorithms. His research on the analysis of the dependence structure of large data sets has been featured in top-tier journals such as Annals of Statistics, Annals of Applied Probability and Bernoulli. Supported by the Swedish Research Council and the Verg-Foundation, his group focuses on studying phase transitions and universality phenomena of large random matrices.
Title: The Curse and Blessing of Dimensionality: A Random Matrix Perspective
Abstract: Dramatic advances in computing power and data collection have made it necessary to study and interpret sometimes overwhelming amounts of data in an efficient and tractable way. Random matrix theory (RMT) has emerged as a powerful framework for analyzing high-dimensional data across a wide range of modern applications. As datasets grow in size and complexity, classical statistical assumptions often break down, while RMT provides asymptotic laws and spectral insights that remain stable in the high-dimensional regime. These tools enable robust estimation, anomaly detection, dimensionality reduction, and the characterization of noise versus signal in complex systems. By modeling data-dependent matrices—such as covariance, correlation, kernel, and adjacency matrices—RMT offers principled approaches for understanding their eigenvalue distributions and fluctuations. Consequently, RMT has become indispensable in fields including machine learning, finance, network science, wireless communications, and genomics, where large-scale structure and uncertainty must be navigated effectively.
In this talk, I will introduce the Marchenko–Pastur and semicircle laws, which describe the limiting eigenvalue distributions of large random matrices. After discussing the challenges posed by covariance estimation in high-dimensional settings, I will turn to the classical problem of testing independence. In this context, we will see how self-normalization can stabilize the eigenvalue distribution of large random matrices, leading to more robust statistical procedures.
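The Marchenko–Pastur law mentioned in the abstract can be observed in a quick simulation (a minimal sketch, not taken from the talk; the matrix sizes and random seed are illustrative assumptions): for a p x n matrix of i.i.d. unit-variance entries with p/n → γ, the eigenvalues of the sample covariance matrix concentrate on the interval [(1−√γ)², (1+√γ)²].

```python
import numpy as np

# Illustrative sketch: empirical eigenvalues of a sample covariance
# matrix versus the Marchenko-Pastur support, with aspect ratio
# gamma = p / n held fixed as dimension and sample size grow.
rng = np.random.default_rng(0)
p, n = 500, 1000
gamma = p / n

X = rng.standard_normal((p, n))   # i.i.d. entries, mean 0, variance 1
S = X @ X.T / n                   # sample covariance matrix
eigs = np.linalg.eigvalsh(S)      # eigenvalues of a symmetric matrix

# Marchenko-Pastur support: [(1 - sqrt(gamma))^2, (1 + sqrt(gamma))^2]
lower = (1 - np.sqrt(gamma)) ** 2
upper = (1 + np.sqrt(gamma)) ** 2
print(f"empirical range: [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"MP support:      [{lower:.3f}, {upper:.3f}]")
```

For γ = 0.5 the empirical eigenvalue range closely matches the predicted support [0.086, 2.914], even though the population covariance is the identity — an instance of the high-dimensional distortion the talk addresses.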
14:15 - 15:15 Panel "Driving Impact Through Innovation" by:
Urban Forssell (host) holds a Ph.D. in Automatic Control from Linköping University. He co-founded the research spinoff company NIRA Dynamics and has, over the last 25 years, held senior leadership and board positions in several international tech companies. He currently works as Head of AI Strategy and Innovation at KTH Royal Institute of Technology.
Frank Jiang holds a Ph.D. in Automatic Control from KTH Royal Institute of Technology. His research focused on human-centric control design and formal verification for safe, connected vehicles. He is the co-founder and CEO of FleetMQ, a research spinoff building a middleware platform for vehicle fleets. He also works at KTH Innovation.
Joanna Hård holds a Ph.D. in Medicine from Karolinska Institutet and has done postdocs in computational biology at ETH, Switzerland, and KTH Royal Institute of Technology. Joanna is CEO and co-founder of Amun AI.
Ian Hoffecker has a Ph.D. in DNA nanotechnology from Kyoto University, Japan. He leads the Molecular Programming Group at KTH’s Gene Technology Department, with a lab located in SciLifeLab’s Solna Campus. His research is focused on developing new molecular information processing technologies based on sequencing, and he is also involved in ongoing spinout initiatives based on technologies developed in his lab.
15:15 - 15:30 Fika
15:30 - 16:00 Poster session III
16:00 - 17:00 Keynote III 'Traveling the Robot Learning Manifold: A Tale of Geometries and Inductive Biases' by Noémie Jaquier
Noémie Jaquier is an assistant professor at the KTH Royal Institute of Technology, where she heads the Geometric Robot Learning (GeoRob) Lab at the Division of Robotics, Perception and Learning. She received her PhD degree from the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2020. Prior to joining KTH, she was a postdoctoral researcher in the High Performance Humanoid Technologies Lab (H²T) at the Karlsruhe Institute of Technology (KIT) and a visiting postdoctoral scholar at the Stanford Robotics Lab. Her research investigates data-efficient and theoretically sound learning algorithms that leverage differential geometry- and physics-based inductive biases to endow robots with close-to-human learning and adaptation capabilities.
Title: Traveling the Robot Learning Manifold: A Tale of Geometries and Inductive Biases
Abstract: Robot motions are fundamentally governed by non-Euclidean geometries. Their state spaces are non-linear manifolds, various variables exhibit distinct geometric characteristics, and collected data often resides in curved spaces. Although many problems naturally lend themselves to geometric interpretations, these underlying structures are often relegated to the background. In the modern era of data-driven robotics, this omission creates a critical gap, as many learning algorithms operate on representations that inadvertently ignore or distort the natural geometries of robotics. In this talk, I will discuss how differential geometry — arising from data structure, physics, and prior knowledge — provides a rigorous framework to construct representations and learning algorithms that respect and exploit these natural geometries. I will demonstrate that the performance of diverse algorithms is significantly enhanced by considering the intrinsic geometric characteristics of data, show that complex dynamic properties are more accurately learned and controlled within physics-based geometric configuration spaces, and illustrate that imposing structured geometry on latent spaces allows for richer representations.
17:00 - ... Closing, drinks and mingle