Following Heitmann-Radin's work, which showed that ground states of the "sticky disk" model crystallize into a triangular lattice, there have been several developments towards a mathematical study of crystalline ground states of discrete configurations. Modern approaches find robust connections to elasticity theory.
For example, if a configuration is not too far from a reference perfect configuration, its deformation can be encoded as the gradient of a locally affine map. This picture breaks down in the presence of too many defects, or in a curved ambient space where no clear-cut reference configuration is specified. In these situations one needs to understand the defects geometrically, and to introduce notions of discrete curvature adapted to the interactions under study.
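As an illustration of the first sentence, here is a minimal sketch (with hypothetical coordinates, not taken from any specific model) of how the deformation of a single lattice triangle is encoded as the gradient of an affine map:

```python
import numpy as np

# Reference triangle from a perfect triangular lattice (hypothetical example).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
# Slightly deformed positions of the same three points.
Y = np.array([[0.0, 0.0], [1.1, 0.0], [0.55, 0.9]])

# Edge vectors as columns: F is the linear map sending reference edges
# to deformed edges, i.e. the gradient of the affine map X -> Y.
DX = np.column_stack([X[1] - X[0], X[2] - X[0]])
DY = np.column_stack([Y[1] - Y[0], Y[2] - Y[0]])
F = DY @ np.linalg.inv(DX)

# The distance of F from the nearest rotation (polar decomposition via SVD)
# is one way to measure the locally stored elastic energy.
U, _, Vt = np.linalg.svd(F)
R = U @ Vt
print(np.linalg.norm(F - R))
```

For configurations with many triangles one would compute such an `F` cell by cell, which is where a global reference configuration becomes essential.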
This project consists in developing further the view that discrete configurations can be crystal-like to a large extent even in curved situations, by extending notions of charts, parallel transport, curvature, and discrete versions of differential-geometric invariants. Once the geometric framework is in place, these notions can be adapted to concrete minimization problems and to the energy behavior of almost-crystalline point configurations with many defects.
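One standard notion of discrete curvature that could serve as a starting point is the angular defect at a vertex of a triangulated configuration: 2π minus the sum of the incident angles. A small sketch (the configurations below are hypothetical examples):

```python
import numpy as np

def angle_defect(center, neighbors):
    """Angular-defect curvature at a vertex: 2*pi minus the sum of the
    angles of the incident triangles at that vertex.  The neighbors are
    assumed to be ordered cyclically around the center."""
    total = 0.0
    n = len(neighbors)
    for i in range(n):
        u = np.asarray(neighbors[i], dtype=float) - np.asarray(center, dtype=float)
        v = np.asarray(neighbors[(i + 1) % n], dtype=float) - np.asarray(center, dtype=float)
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        total += np.arccos(np.clip(cos, -1.0, 1.0))
    return 2 * np.pi - total

# In the perfect triangular lattice each interior vertex has 6 neighbors
# separated by 60-degree angles, so the defect vanishes (flat configuration).
hexagon = [(np.cos(t), np.sin(t)) for t in np.linspace(0, 2 * np.pi, 7)[:-1]]
print(angle_defect((0.0, 0.0), hexagon))  # ~0.0
```

A vertex of a genuinely curved configuration (e.g. the apex of a pyramid in 3D) produces a nonzero defect, which is the kind of quantity the project would refine and adapt.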
Don't hesitate to write me if you are interested in these topics.
The problems appearing in this field concern the "throwing at random" of geometric objects such as balls, the concept of a "random closed set" (i.e., which nice probability measures can be defined on the closed subsets of a space, as opposed to measures on its points?), and the average slicing of bodies by subspaces, curves, surfaces, etc.
The definition of Choquet capacities also belongs to this field.
In a recent paper of mine with Codina Cotar (see paper A4 in the "papers" section) we use a special type of random packing to localize the interaction in multimarginal optimal transport. It turns out that cutting off the interaction by a random packing is simply better than the usual deterministic cutoff.
This project would consist of learning more about stochastic geometry and applying it to multimarginal optimal transport and other optimization problems. The theory of capacities also enters, in a different way, in paper A6 from my "papers" list.
Currently I am working on this theme with Daniel Pons from Universidad Andres Bello.
Don't hesitate to write me if you are interested in these topics.
From Weierstrass' and Malgrange's preparation theorems up to the Nash-Moser implicit function theorem (see Wikipedia to find out what they are!), there is a strong mathematical tradition of understanding generalizations and quantitative versions of implicit function theorems in geometry, analysis, and applications.
The implicit function theorem is one of the fundamental tools for linking properties of functions and manifolds.
I am interested in developing further quantitative versions of the implicit function theorem, which may allow one to prove unique continuation principles, similarly to the case of analytic functions.
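As a toy illustration of the constructive side of the classical theorem (the function F below is a hypothetical example, not taken from the project): near a point where ∂F/∂y ≠ 0, the zero set of F is the graph of a function y(x), and Newton's method computes that branch.

```python
def implicit_branch(F, dFdy, x, y0, tol=1e-12, max_iter=50):
    """Solve F(x, y) = 0 for y near y0 by Newton's method -- a constructive
    face of the implicit function theorem, valid while dFdy stays away
    from zero along the branch."""
    y = y0
    for _ in range(max_iter):
        step = F(x, y) / dFdy(x, y)
        y -= step
        if abs(step) < tol:
            break
    return y

# F(x, y) = x^2 + y^2 - 1: near (0, 1) the zero set is y = sqrt(1 - x^2).
F = lambda x, y: x * x + y * y - 1.0
dFdy = lambda x, y: 2.0 * y
print(implicit_branch(F, dFdy, 0.6, 1.0))  # ~0.8 = sqrt(1 - 0.36)
```

Quantitative versions of the theorem ask, roughly, how far such a branch can be continued and with what bounds, in terms of explicit estimates on F.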
Don't hesitate to write me if you are interested in these topics.
Consider an infinite cylinder of radius R, in which we pack balls of radius 1. As R grows, what is the densest packing of balls in the cylinder? Computer simulations show an interesting pattern, and some special values of R appear. One can think of the densest packing as a "ground state", or "most efficient" configuration. How can we account for the "stored energy", or "efficiency deficit", using geometric and topological invariants (such as different notions of combinatorial curvature)? A related question: why are human hairs curled?
(Numerical simulations and image from this paper of Fu-Steinhardt-Zhao-Socolar-Charbonneau)
Don't hesitate to write me if you are interested in these topics.
How does a bee see the world? How does a dog? How does a mantis shrimp?
The difference between these animals and a human (or between a color-blind person and one with full color vision) is that their eyes contain photoreceptors which respond differently to sources of light of different wavelengths. Starting from these data, how do we answer the three questions above? The question is really, given an image, to "translate" the source of light so that it gives to a set of receptors S the same neuronal response that it was giving to a set of receptors S' before the alteration. This translation problem automatically becomes an interesting and challenging optimization problem, which we can study via new versions of optimal transport theory. It also has other cool applications in signal processing, beyond vision and perception.
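A toy linear-algebra version of the translation problem (all sensitivity curves below are hypothetical Gaussians, and least squares is only a stand-in for the optimal transport formulation):

```python
import numpy as np

wavelengths = np.linspace(400, 700, 301)  # visible range, in nm

def sensitivity(peak, width=40.0):
    """Hypothetical Gaussian photoreceptor sensitivity curve."""
    return np.exp(-((wavelengths - peak) ** 2) / (2.0 * width ** 2))

# Two hypothetical receptor sets: S' has slightly shifted cone peaks,
# loosely mimicking an altered color perception.
S = np.stack([sensitivity(p) for p in (440.0, 530.0, 560.0)])
S_prime = np.stack([sensitivity(p) for p in (440.0, 545.0, 565.0)])

# A hypothetical source of light and the response it gives to S'
# (each receptor responds with an inner product against the spectrum).
f = sensitivity(500.0, width=60.0)
target = S_prime @ f

# "Translation": find a spectrum g whose response on S matches the response
# that f gave on S'.  The system is underdetermined, so we take the
# minimum-norm least-squares solution; g may fail to be a nonnegative
# spectrum, which is one reason the real problem is genuinely harder.
g, *_ = np.linalg.lstsq(S, target, rcond=None)
print(np.allclose(S @ g, target))  # True
```

The optimal transport versions mentioned above replace this unconstrained least-squares step with a transport cost between spectra, which keeps the translated light physically meaningful.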
(image source, and further information for those curious to learn about human and bee eyes, here)
Don't hesitate to write me if you are interested in these topics.