The workshop is a two-day event, running from lunch to lunch.
Day 1
Registration and Lunch 12:15–13:15
13:15–13:30
Jens Sjölund
13:30–14:30
Simon Olsson, Chalmers University of Technology
Abstract: Generative modeling is a powerful technique for learning probability distributions from data. Many properties and quantities of interest in the sciences are expressed as expectation values with respect to a probability distribution, most prominently in statistical physics. Focusing on the molecular sciences, in this talk I will detail how we are developing generative AI to improve molecular dynamics simulations, potentially supporting drug discovery and materials design in the near future.
Coffee Break 14:30–15:00
15:00–16:45
Chair: Paul Häusner
Joel Oskarsson: Forecasting the Weather with Graph Neural Networks
Elias Nyholm: General mathematical framework for equivariant neural networks
Pavlo Melnyk: On Learning Deep O(n)-Equivariant Hyperspheres
Lele Cao: Continuous-Time Dynamic Graphs
Sofiane Ennadir: If You Want to Be Robust, Be Wary of Initialization
Philipp Misof: Neural Tangent Kernels for Equivariant Neural Networks
19:30–22:00
Brasserie 21
Day 2
Registration and Coffee 08:30–09:15
09:15–10:15
Kathlén Kohn, KTH Royal Institute of Technology
Abstract: A fixed neural network architecture parametrizes a family of functions, called the neuromanifold of the network. In this talk, we explore how the geometry of the neuromanifold changes when varying some architectural choices, such as depth, width, activation function, or type of layers. We will then explain how the geometric features of the neuromanifold (e.g., dimension, curvature, singularities) affect the loss landscape (i.e., the critical points of network training). This talk focuses on polynomial activation functions to make use of powerful tools from algebraic geometry. The types of layers we consider are dense layers, convolutional layers, and self-attention mechanisms. In particular, our results show the generic identifiability of lightning self-attention networks and polynomial convolutional networks, meaning that there are typically no hidden symmetries in the network parameters. This talk is based on joint works with Joan Bruna, Nathan Henry, Giovanni Marchetti, Guido Montúfar, Vahid Shahverdi, and Matthew Trager.
Coffee Break 10:15–10:45
10:45–12:30
Chair: Elias Nyholm
Felix Faber: Equivariant matrix function neural networks
Maria Bånkestad: Ising on the Graph
Jesús Pineda: Relational Inductive Bias as a Key to Smarter Deep Learning Microscopy
Georg Bökman: Equivariance for computational efficiency
Nazar Netterström: Unauthorised Session Detection with RNN-LSTM Models and Topological Data Analysis
Filip Ekström Kelvinius: Accelerating Molecular Graph Neural Networks via Knowledge Distillation
Lunch & mingle