Mircea Petrache

What's new:

  • Organizing workshop on "Theoretical and Mathematical aspects of Deep Learning" - 4 August 2022, at UC Chile (hybrid event)

  • Visiting Augusto Gerolin (Ottawa) -- 16 July - 26 July 2022

  • Organizing summer school "Point configurations: deformations and rigidity" -- London, 26 June - 2 July 2022

  • "Classification of uniformly distributed measures of dimension 1 in general codimension" (with P. Laurain) appeared in Asian. J. Math. https://www.intlpress.com/site/pub/pages/journals/items/ajm/content/vols/0025/0004/a006/

  • New preprint "Almost sure recovery in quasi-periodic structures" with Rodolfo Viera https://arxiv.org/abs/2112.11613

  • Fondecyt Regular grant (4 years 2021-2025). Project title: "Rigidity, stability and uniformity for large point configurations"

  • New preprint "Sharp discrete isoperimetric inequalities in periodic graphs via discrete PDE and Semidiscrete Optimal Transport" with Matias Gomez (master student UTFSM) https://arxiv.org/abs/2012.11039

(last updated: July 9, 2022)

Contact: Facultad de Matemáticas, Avda. Vicuña Mackenna 4860, Macul, Santiago, 6904441, Chile

Email: mpetrache@mat.uc.cl Cellphone: +56 9 3686 3545 Office phone: 23544038

Mathematics is there to interact with other sciences.

I'm actively searching for new ways to apply it to real-world problems. I will not work on a topic unless I actually believe that the knowledge gained can later be applied outside of mathematics.

I was trained in PDEs, Calculus of Variations, Geometric Analysis and Geometric Measure Theory, which I now use to study emergent behavior and structures, especially (but not only) in large point configurations and deep learning.


CV

Since January 2018 I have been an Assistant Professor at PUC Chile.

September 2017-December 2017: visited the FIM in Zürich and Vanderbilt University.

2015-2017: Postdoc at the Max Planck Institute for Mathematics in the Sciences (MIS) in Leipzig and at the Max Planck Institute in Bonn. (Funding: European Post-Doc Institute. Mentors: B. Kirchheim, S. Müller)

2013-2015: Postdoc at the Laboratoire Jacques-Louis Lions. (Funding: Fondation Sciences Mathématiques de Paris. Mentor: Sylvia Serfaty)

2013: Ph.D. at ETH Zürich (Thesis: "Weak bundle structures and a variational approach to Yang-Mills Lagrangians in supercritical dimensions". Advisor: Tristan Rivière).

2008: MSc at Scuola Normale Superiore (Thesis: "Differential Inclusions and the Euler equation". Advisor: Luigi Ambrosio).


Scientific themes I care about (storyline):


  1. Topological singularities and large point configurations (my mathematical upbringing):

  • In geometry and physics, vortices or topological defects form, and are studied via regularity theory.

  • Groups of defects form interacting point configurations (in setups including, but not restricted to, "points = topological singularities").

  • These configurations often organize themselves in relation to the geometry/shape of the ambient space.

  • How can we quantify "regularity" for discrete structures? We may look at distance/Fourier/spectral statistics of the points and quantify noise levels: we then go from random, to hyperuniform, to regular/crystalline configurations (a standard Fourier-side formulation is sketched at the end of this item). Questions about the persistence/recovery of these statistics are closely related.

  • Algorithmic complexity viewpoint: a limiting factor in working with general point configurations is their exponential complexity (= the space of configurations grows in complexity exponentially in the number of points). And high regularity = low complexity: to describe points forming a regular lattice we only need to fix a basis and an origin, while to describe a random point cloud we need to enumerate all the points one by one!

The nicest models of large point configurations that we use combine elegance and generality with built-in exponential complexity.

But if we stick with these models, we might miss that the data itself is actually much better behaved!
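To make the bullet on "quantifying regularity" above concrete, here is one standard Fourier-side statistic, sketched in my own notation (a textbook notion, not tied to any specific paper of mine). For a configuration of N points x_1, ..., x_N, the empirical structure factor is

    S(k) = \frac{1}{N} \left| \sum_{j=1}^{N} e^{\,i\, k \cdot x_j} \right|^2 .

For a Poisson (fully random) cloud S(k) stays close to 1 at all wavelengths; "hyperuniform" means S(k) -> 0 as |k| -> 0, i.e. large-scale density fluctuations are anomalously suppressed; perfect lattices sit at the most rigid end of this scale.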


  2. How can we improve our way of thinking about (our classical models of) large groups of points in space, to make better sense of real data points?

A powerful principle is to quantify the group behavior of the points through problem-specific structures, which in turn suggest low-complexity approximations. Examples of the mathematics one can use (a toy computational example follows this list):

  • Statistical mechanics measures group behavior and correlations within the theory of crystallization and in the study of other phases of matter.

  • In materials science, macroscopic/continuum limits summarize the properties of large numbers of atoms by (fewer) continuum/multiscale variables.

  • In probability theory, one can start with large deviation principles and concentration-of-measure bounds.

  • In computer science, multi-scale analysis and metric dimension reduction are studied (cf. also compressed sensing).
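As a toy illustration of what I mean by "problem-specific statistics of a point cloud", here is a minimal numerical sketch of the Fourier statistic (structure factor) from item 1; the function and variable names are mine and purely illustrative, not taken from any specific paper:

    import numpy as np

    def structure_factor(points, k_vectors):
        # points: (N, d) array of positions; k_vectors: (K, d) array of wave vectors.
        # Returns the empirical structure factor S(k) = |sum_j exp(i k . x_j)|^2 / N,
        # a Fourier statistic separating random from hyperuniform/crystalline clouds.
        N = points.shape[0]
        phases = points @ k_vectors.T                 # (N, K) matrix of dot products k . x_j
        amplitudes = np.exp(1j * phases).sum(axis=0)  # sum over the N points, for each k
        return np.abs(amplitudes) ** 2 / N

    # Toy usage: for a Poisson (uniform random) cloud, S(k) should be close to 1 for generic k != 0.
    rng = np.random.default_rng(0)
    points = rng.uniform(0.0, 100.0, size=(2000, 2))
    k_vectors = rng.normal(size=(5, 2))
    print(structure_factor(points, k_vectors))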


  3. Similarly to the above, Deep Neural Networks have been experimentally shown to "learn surprisingly well" on real-life datasets, but we have no good theory.

What are we missing in order to understand this success? Some threads I'm following at the moment:

  • Quantifying the information flow through Neural Networks, especially in models which are easily testable on "real data", in order to understand the Lottery Ticket Hypothesis.

  • How Neural Networks process geometric/topological features of the input datasets (topics including, but not restricted to, Topological Data Analysis and Persistence Theory).

  • Geometry-Informed Neural Networks: why keep building neural networks based on imagining data in R^d or in the L^2 norm? I'm thinking especially about ways to use Optimal Transport in DNN theory, and about how to use Gromov hyperbolicity to implement better learning algorithms (a small computational sketch of the Optimal Transport side follows this list).

  • Memory mechanisms in Neural Networks and in the brain: what can Neuromorphic Neural Networks teach us about memory? What is their mathematical theory going to look like? This is related to understanding, in a principled way, the role of recurrence and of time-dependencies in deep learning.
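Since I keep mentioning Optimal Transport: below is a minimal, self-contained sketch of the standard Sinkhorn iterations for entropically regularized OT, one of the basic computational tools in this area. The snippet is illustrative only (names and parameter values are mine), not code from any of my projects:

    import numpy as np

    def sinkhorn_plan(a, b, cost, eps=0.05, iters=500):
        # a, b: probability vectors (nonnegative, summing to 1); cost: (m, n) cost matrix.
        # Returns an approximate transport plan for the entropically regularized OT problem.
        K = np.exp(-cost / eps)                   # Gibbs kernel
        u = np.ones_like(a)
        for _ in range(iters):
            v = b / (K.T @ u)                     # alternate scalings so the plan matches marginal b ...
            u = a / (K @ v)                       # ... and marginal a
        return u[:, None] * K * v[None, :]        # plan P = diag(u) K diag(v)

    # Toy usage: transport between two small empirical measures on the line.
    x = np.linspace(0.0, 1.0, 5)
    y = np.linspace(0.0, 1.0, 7)
    cost = (x[:, None] - y[None, :]) ** 2         # squared-distance cost
    a = np.full(5, 1 / 5)
    b = np.full(7, 1 / 7)
    P = sinkhorn_plan(a, b, cost)
    print(P.sum(axis=1), P.sum(axis=0))           # marginals should be close to a and b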


Links to some past events (for personal reference):