Mircea Petrache
What's new:
Accepted at NeurIPS 2023: "Approximation-Generalization Trade-offs under (Approximate) Group Equivariance" (with S. Trivedi) arxiv.org/pdf/2305.17592.pdf
Accepted at NeurIPS 2023: "Three iterations of (d − 1)-WL test distinguish non isometric clouds of d-dimensional points" (with P. Barcelo, V. Delle Rose, A. Kozachinskiy, C. Rojas) arxiv.org/pdf/2303.12853.pdf
Preprint "Optimal Quantization with Branched Optimal Transport distances" (with P. Pegon) arxiv.org/pdf/2309.08677.pdf
Preprint "Effective recovery of Fourier spectra and spectral approximation by finite groups" (with R. Viera) arxiv.org/pdf/2308.07446.pdf
Doctoral course on "Data Geometry and Deep Learning" at the University of Rome "La Sapienza", Nov-Dec 2022
Organized the workshop "Theoretical and Mathematical aspects of Deep Learning", 4 August 2022, at UC Chile (hybrid event)
Fondecyt Regular grant (4 years 2021-2025). Project title: "Rigidity, stability and uniformity for large point configurations"
(last updated: Mar 24, 2023)
Contact: Facultad de Matemáticas & Instituto de Ingeniería Matemática y Computacional, Avda. Vicuña Mackenna 4860, Macul, Santiago, 6904441, Chile
Email: mpetrache@mat.uc.cl Cellphone: +56 9 3686 3545 Office phone: 23544038
Mathematics is there to interact with other sciences.
I'm actively searching for new ways to apply it to real-world problems. I will not work on a topic unless I actually believe that the knowledge gained can later be applied outside of mathematics.
I was trained in PDEs, Calculus of Variations, Geometric Analysis and Geometric Measure Theory, which I now use to study emergent behavior and structures, especially (but not only) in large point configurations and deep learning.
CV
Since January 2018 I have been an Assistant Professor at PUC Chile.
September 2017-December 2017: visited FIM in Zürich and Vanderbilt University.
2015-17: Postdoc at the Max Planck Institute for Mathematics in the Sciences in Leipzig and the Max Planck Institute for Mathematics in Bonn. (Funding: European Post-Doc Institute. Mentors: B. Kirchheim, S. Müller)
2013-2015: Postdoc at Laboratoire Jacques-Louis Lions (Funding: Fondation Sciences Mathématiques de Paris. Mentor: Sylvia Serfaty)
2013: Ph.D. at ETH Zürich (Thesis: "Weak bundle structures and a variational approach to Yang-Mills Lagrangians in supercritical dimensions". Advisor: Tristan Rivière)
2008: MSc at Scuola Normale Superiore (Thesis: "Differential Inclusions and the Euler equation". Advisor: Luigi Ambrosio)
Scientific themes I care about (storyline):
- Topological singularities and large point configurations (my mathematical upbringing):
In geometry and physics, vortices or topological defects are formed and studied via regularity theory.
Groups of defects form interacting point configurations (in setups including, but not restricted to, "points = topological singularities").
These configurations often organize themselves in relation to the geometry/shape of the ambient space.
How can we quantify "regularity" for discrete structures? We may look at distance/Fourier/spectral statistics of the points and quantify noise levels: we then pass from random, to hyperuniform, to regular/crystalline configurations (a worked formula is sketched after this theme). Questions about the persistence/recovery of such statistics are closely related.
Algorithmic complexity viewpoint: a limiting factor in working with general point configurations is their exponential complexity (the space of configurations grows in complexity exponentially in the number of points), and high regularity means low complexity: to describe points forming a regular lattice we only need to fix a basis and an origin, while to describe a random point cloud we must enumerate all the points one by one!
The nicest models of large point configurations that we use combine elegance and generality with built-in exponential complexity.
But if we stick to these models, we might miss that the data itself is actually much better behaved!
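To make the "spectral statistics" quantifier above concrete, here is a minimal sketch in standard notation (my illustrative choice of normalization, not tied to any specific paper): the structure factor of a configuration x_1, ..., x_N in R^d is

    % Structure factor: low-frequency content of the point configuration
    \[ S(\mathbf{k}) = \frac{1}{N}\,\Big|\sum_{j=1}^{N} e^{\,i\,\mathbf{k}\cdot \mathbf{x}_j}\Big|^{2} \]
    % Hyperuniformity = suppressed density fluctuations at large scales,
    % i.e. in the infinite-volume limit:
    \[ S(\mathbf{k}) \to 0 \quad \text{as } |\mathbf{k}| \to 0 \]

A Poisson (fully random) configuration has S(k) ≈ 1 at all frequencies, a perfect lattice concentrates S on Bragg peaks, and hyperuniform configurations sit in between: this is one way the scale from random to crystalline gets quantified.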
- How can we improve our way of thinking about (our classical models of) large groups of points in space, in order to make better sense of real data points?
A powerful principle is to quantify the group behavior of the points through problem-specific structures, which in turn inform low-complexity approximations. Examples of what maths to use:
Statistical mechanics measures group behavior and correlations within the theory of crystallization and in the study of other phases of matter.
In materials science, in macroscopic/continuum limits, the properties of large numbers of atoms are summarized by (fewer) continuum/multiscale variables.
In probability theory, one can start with large deviation principles and concentration-of-measure bounds.
In computer science, multi-scale analysis and metric dimension reduction are studied (cf. also compressed sensing); a toy sketch of the latter follows this list.
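As a toy illustration of metric dimension reduction (a hedged sketch with standard textbook parameter choices, not code from any project of mine): a random Gaussian projection in the Johnson-Lindenstrauss spirit compresses n points into roughly O(log(n)/eps^2) dimensions while approximately preserving all pairwise distances.

    # Sketch: Johnson-Lindenstrauss-style random projection.
    # Illustrative only; parameters are standard textbook choices.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eps = 500, 1000, 0.5
    k = int(np.ceil(8 * np.log(n) / eps**2))      # target dimension ~ O(log n / eps^2)

    X = rng.standard_normal((n, d))               # n points in R^d
    P = rng.standard_normal((d, k)) / np.sqrt(k)  # random projection matrix
    Y = X @ P                                     # compressed points in R^k

    # Distortion check on random pairs of distinct points
    pairs = rng.integers(0, n, size=(200, 2))
    pairs = pairs[pairs[:, 0] != pairs[:, 1]]
    orig = np.linalg.norm(X[pairs[:, 0]] - X[pairs[:, 1]], axis=1)
    proj = np.linalg.norm(Y[pairs[:, 0]] - Y[pairs[:, 1]], axis=1)
    print("distance ratios:", (proj / orig).min(), (proj / orig).max())

The point of the example: the compressed description has size n*k rather than n*d, with the loss in metric information controlled by eps; this is the simplest instance of a problem-specific structure instructing a low-complexity approximation.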
- Similarly to the theme above, Deep Neural Networks are experimentally shown to "learn" real-life datasets surprisingly well, but we have no good theory of why.
What are we missing in order to understand this success? Some threads I'm following at the moment:
Quantifying the information flow through Neural Networks, especially in models which are easily testable on real data, in order to understand the Lottery Ticket Hypothesis.
How Neural Networks process geometric/topological features of the input datasets (topics including, but not restricted to, Topological Data Analysis and Persistence Theory).
Geometry-Informed Neural Networks: why keep building neural networks based on imagining data in R^d or in L^2 norm? I'm thinking especially about ways to use Optimal Transport for DNN theory, and about how to use Gromov hyperbolicity to implement better learning algorithms (a minimal sketch follows this list).
Memory mechanisms in Neural Networks and in the brain: what can Neuromorphic Neural Networks teach us about memory? What is their mathematical theory going to look like? This is related to understanding, in a principled way, the role of recurrence and of time dependencies in deep learning.
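To make the hyperbolicity thread concrete (a minimal sketch under my own illustrative choices, not a description of finished work): hyperbolic representation learning replaces the Euclidean distance by the distance of the Poincaré ball model, where tree-like (Gromov-hyperbolic) data embeds with low distortion.

    # Sketch: distance in the Poincaré ball model of hyperbolic space,
    # the basic primitive behind hyperbolic embeddings of tree-like data.
    # Illustrative only, not code from any specific project of mine.
    import numpy as np

    def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
        """Hyperbolic distance between u, v in the unit ball (|u|, |v| < 1)."""
        sq = np.sum((u - v) ** 2)
        denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return float(np.arccosh(1.0 + 2.0 * sq / denom))

    a = np.array([0.0, 0.0])
    b = np.array([0.9, 0.0])
    c = np.array([0.99, 0.0])
    print(poincare_distance(a, b))  # ~2.94
    print(poincare_distance(a, c))  # ~5.29: distances blow up near the boundary

Near the boundary the hyperbolic volume grows exponentially, so there is "room" for exponentially branching trees; this is exactly the kind of geometric prior that embeddings in R^d lack.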
Links to some past events (for personal reference):
ICIAM conference, Valencia, Spain, July 15-19, 2019 -- talks in the mini-symposia "Discrepancy and Minimal Energy" and "Molecular simulation: quantum mechanical models" (photo of myself with L. Betermin at https://iciam2019.org/images/site/FotosCongreso/CommonAreas/Session(4).jpg)
BIRS workshop on Optimal Transport Methods in Density Functional Theory, January 27th - February 1st, 2019
Workshop that I co-organized on November 30th, 2018, in London: Symmetries, asymptotics and multi-scale approaches
AIM workshop Discrete geometry and automorphic forms, San Jose, California, September 24-28, 2018
Workshop on Geometric Measure Theory in Verona, June 11-15, 2018
Workshop on Nonlocal interactions: Dislocations and beyond, Bath, 11-14 June 2018
ICERM workshop on Optimal and Random Point Configurations, Brown University, Providence, February 26th - March 2nd, 2018
Workshop that I co-organized on February 15-17, 2017, in London: New trends in Mathematical Physics at the interface of Analysis and Probability