AI Working Group
Welcome to the page of the deep learning working group of the Laboratoire de Mathématiques de Lens.
The working group meets on Tuesdays at 1:30 pm in room P-108.
09/12/25 - Mini-colloquium on the occasion of Benoît Brebion's thesis defense
Nicolas Boutry (EPITA) : Relating the contour tree to the tree of shapes
Abstract: In recent years, several works have shown that topological data analysis (TDA) and mathematical morphology (MM) are strongly related. Indeed, it has been proven that dynamics, often used in MM to determine seeds for computing watersheds, and persistence, often used in TDA to filter Morse-Smale complexes, are in fact equivalent. It has also been shown that Morse functions, widely used in TDA and generally associated with a gradient vector field, are equivalent, up to sign, to the simplicial stacks used in MM. This allowed us to prove that computing a Minimum Spanning Forest (MSF), a combinatorial optimization problem, on the dual graph of a simplicial stack is equivalent to computing the gradient vector field of the corresponding Morse function. Here, we present a new result: the contour tree, known in TDA as the Reeb graph (on simply connected domains), is equivalent to the tree of shapes from MM (up to a dual cell complex computation). This shows that applications using techniques from TDA could be used in MM and conversely, further reinforcing the link between the two fields.
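To make the dynamics/persistence equivalence cited above tangible, here is a minimal Python sketch (our illustration, not code from the talk) that computes the 0-dimensional persistence pairs of a 1D function with a union-find sweep; on 1D functions the lifetime of each local minimum coincides with its dynamics.

```python
def persistence_pairs(f):
    """0-dimensional persistence of a 1D function f via a sublevel-set sweep.
    Each local minimum births a component; when two components merge, the one
    with the higher minimum dies, and death - birth is its persistence
    (equal, in 1D, to the dynamics of that minimum)."""
    n = len(f)
    parent = list(range(n))

    def find(i):  # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs, born = [], [False] * n
    for i in sorted(range(n), key=lambda i: (f[i], i)):  # increasing values
        born[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and born[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                young, old = (ri, rj) if f[ri] >= f[rj] else (rj, ri)
                if f[young] < f[i]:            # skip zero-persistence merges
                    pairs.append((f[young], f[i]))   # (birth, death)
                parent[young] = old
    return pairs

print(persistence_pairs([3.0, 1.0, 2.5, 0.0, 2.0, 0.5, 4.0]))
# [(0.5, 2.0), (1.0, 2.5)] -- the global minimum never dies
```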
Sergei Grudinin (CNRS) : tba
Jérôme Bosche (UPJV) : tba
25/11/25 - Benoît Brebion (LML) : "Thesis defense dress rehearsal."
Abstract: A presentation of generative AI (diffusion models, GANs, etc.), followed by a focus on three lines of work developed around this theme during my thesis: (1) an application to a medical signal translation problem (EEG in premature newborns to fetal MEG); (2) a more theoretical exploration of optimal transport; (3) a study of the impact of numerical methods on the Negative Log-Likelihood metric in diffusion models.
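As a concrete anchor for the diffusion-model part, here is a minimal numpy sketch of the forward (noising) process under the standard DDPM assumptions of Ho et al. (2020); the schedule values are illustrative and this is not code from the thesis.

```python
import numpy as np

# Minimal DDPM-style forward process: the closed-form marginal is
# q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I).
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (illustrative)
alphas_bar = np.cumprod(1.0 - betas)     # \bar{alpha}_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in one shot, without iterating t steps."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)        # toy "data" vector
print(q_sample(x0, 10, rng))       # barely noised
print(q_sample(x0, 999, rng))      # almost pure Gaussian noise
```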
10/11/25 - Kais Hariz (LML): "KAN: Kolmogorov-Arnold Networks" (after Ziming Liu et al.) Note: unusual time! Monday at 9:45!
Abstract: Inspired by the Kolmogorov-Arnold representation theorem, Kolmogorov-Arnold Networks (KANs) are promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all: every weight parameter is replaced by a univariate function parametrized as a spline. This seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. In summary, KANs are promising alternatives to MLPs, opening opportunities for further improving today's deep learning models, which rely heavily on MLPs.
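As a concrete illustration of "learnable activation functions on edges", here is a minimal sketch (ours, not the authors' code; the paper parametrizes each edge function with B-splines, replaced here by a Gaussian radial-basis expansion for brevity):

```python
import numpy as np

class KANLayer:
    """One Kolmogorov-Arnold layer: a learnable univariate function on every
    edge (i, j), here phi_ij(x) = sum_k c_ijk * exp(-(x - g_k)^2), an RBF
    stand-in for the B-spline parametrization used in the paper."""

    def __init__(self, d_in, d_out, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.grid = np.linspace(-2, 2, n_basis)              # basis centres
        self.coef = rng.standard_normal((d_in, d_out, n_basis)) * 0.1

    def __call__(self, x):                                   # x: (batch, d_in)
        # Evaluate every basis function at every input coordinate.
        basis = np.exp(-(x[..., None] - self.grid) ** 2)     # (batch, d_in, n_basis)
        # Output j sums phi_ij(x_i) over incoming edges i (no linear weights).
        return np.einsum('bik,iok->bo', basis, self.coef)

layer = KANLayer(d_in=3, d_out=2)
x = np.random.default_rng(1).standard_normal((4, 3))
print(layer(x).shape)  # (4, 2)
```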
10/11/25 - Benoît Brebion (LML) : "Generative artificial intelligence, research and applications." Note: unusual time! Monday at 9:30!
Abstract: A presentation of generative AI (diffusion models, GANs, etc.), followed by a focus on three lines of work developed around this theme during my thesis: (1) an application to a medical signal translation problem (EEG in premature newborns to fetal MEG); (2) a more theoretical exploration of optimal transport; (3) a study of the impact of numerical methods on the Negative Log-Likelihood metric in diffusion models.
21/10/25 - Nicola Carissimi (Université de Lille) : The Rubik's cube via LLMs and GRPO
Origin of GRPO: arXiv:2402.03300
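GRPO's key departure from PPO, per arXiv:2402.03300, is that it needs no learned value function: it samples a group of completions per prompt and uses the group's own reward statistics as the baseline. A minimal sketch of that advantage computation (our illustration, not code from the talk):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: for G sampled completions of one prompt,
    A_i = (r_i - mean(r)) / std(r).  The group mean replaces the value
    network as the baseline (cf. arXiv:2402.03300)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# e.g. 4 sampled solution attempts for one cube-state prompt, scored 0/1:
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # [ 1. -1. -1.  1.]
```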
14/10/25 - Alexander Chervov (Institut Curie) : "CayleyPy - Artificial intelligence methods for group and graph theories"
Abstract: We will present an AI-based open source Python library "CayleyPy" which can handle googol-size Cayley graphs and significantly outperforms classical computer algebra systems GAP/SAGE for several tasks. Hundreds conjectures and several results were obtained with its help. Classical group theory tasks e.g. decomposition of the group elements, can be rephrased as standard reinforcement learning tasks, and approached in a similar manner as Google Deepmind's AlphaGo/Zero. We will also give an overview of various recent achievements in "AI for math" emerging field of research.
The talk will be based on: arXiv:2509.19162, arXiv:2502.18663, arXiv:2502.13266.
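To make "decomposition of group elements as pathfinding" concrete, here is a minimal sketch (ours, not CayleyPy's API) that decomposes a permutation into adjacent transpositions by brute-force BFS on the Cayley graph of S_n; reinforcement learning methods like those in the library replace this exhaustive search when the graph is astronomically large.

```python
from collections import deque

def decompose(target, n):
    """Write `target` (a permutation of range(n), as a tuple) as a word in
    the adjacent transpositions s_i = (i, i+1), by BFS on the Cayley graph
    of S_n from the identity."""
    gens = [tuple(range(i)) + (i + 1, i) + tuple(range(i + 2, n))
            for i in range(n - 1)]
    identity = tuple(range(n))
    parent = {identity: None}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        if g == target:
            word = []                 # backtrack to read off the word
            while parent[g] is not None:
                g, i = parent[g]
                word.append(f"s{i}")
            return word[::-1]
        for i, s in enumerate(gens):
            h = tuple(g[s[k]] for k in range(n))   # right-multiply by s_i
            if h not in parent:
                parent[h] = (g, i)
                queue.append(h)

print(decompose((2, 0, 1), 3))  # ['s1', 's0']: two adjacent swaps
```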
07/10/25 - Ryuichiro Hataya (Kyoto University) : "Investigating Transformers' Optimization and Adaptation Mechanisms"
Abstract: Transformers are the essential backbone of today's AI systems. This talk investigates the theoretical mechanisms behind their success in training and adaptation. It will cover their optimization dynamics, domain adaptation, and in-context learning capabilities.
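As background, the single computation these analyses revolve around is scaled dot-product self-attention (Vaswani et al., 2017); a generic single-head numpy sketch, not material from the talk:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention:
    Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))                      # 5 tokens, d_model = 16
W = [rng.standard_normal((16, 8)) for _ in range(3)]  # d_k = d_v = 8
print(self_attention(X, *W).shape)                    # (5, 8)
```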
30/09/25 - Ivo Dell'Ambrogio (LML) : Nano GPT 4 // Germain Poloudny (LML) : thesis topic
23/09/25 - Ivo Dell'Ambrogio (LML) : Nano GPT 3
16/09/25 - Ivo Dell'Ambrogio (LML) : Nano GPT 2
09/09/25 - Ivo Dell'Ambrogio (LML) : Nano GPT 1
Viewing and discussion of Andrej Karpathy's video "Let's build GPT: from scratch, in code, spelled out":
https://www.youtube.com/watch?v=kCc8FmEb1nY
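The video starts from a character-level bigram model before building up to a full Transformer; here is a minimal count-based rendition of that baseline (our paraphrase for quick reference; the video's own version is a trained PyTorch module):

```python
import numpy as np

# Character-level bigram language model: estimate P(next char | char)
# from bigram counts, then sample a continuation.
text = "to be or not to be"
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

counts = np.ones((len(chars), len(chars)))          # add-one smoothing
for a, b in zip(text, text[1:]):
    counts[stoi[a], stoi[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)  # rows sum to 1

rng = np.random.default_rng(0)
i = stoi['t']
out = ['t']
for _ in range(20):                                  # sample 20 characters
    i = rng.choice(len(chars), p=probs[i])
    out.append(chars[i])
print(''.join(out))
```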
Resources: