Here is a list of topics I have been interested in, with brief summaries of some of my papers in each area. When I mention my publications, I cite them by the number under which they appear on the Publications page.
A set is called k-rectifiable if it can be covered by countably many k-dimensional Lipschitz graphs, up to a set of zero k-dimensional Hausdorff measure. These sets can be pretty wild, but they are among the nicest you can encounter in geometric measure theory. One of my interests concerns (criteria for) rectifiability of sets in Euclidean spaces.
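In symbols, one standard way to phrase the definition: a set E ⊆ Rn is k-rectifiable if

$$ E \subseteq N \cup \bigcup_{i=1}^{\infty} \Gamma_i, \qquad \mathcal{H}^k(N) = 0, $$

where each $\Gamma_i$ is the graph of a Lipschitz function over some k-dimensional plane and $\mathcal{H}^k$ denotes the k-dimensional Hausdorff measure.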
One of my first papers [3], written together with Kennedy Obinna Idu, concerned criteria for C1,α rectifiability (the analogue of rectifiability where one covers with graphs of class C1,α). There we proved an analogue of the following classical result: a set is rectifiable if and only if it admits an approximate tangent cone at almost every point. More precisely, we showed that for C1,α rectifiability one can replace the approximate tangent cone with an approximate tangent paraboloid. Moreover, the paraboloid can be replaced with suitable neighborhoods of k-planes that are allowed to depend also on the scale. For an idea of the results and the proof, see the related poster.
With Andrea Merlo, in [6] we have shown that measures satisfying a suitable endpoint Fourier restriction estimate must be purely 1-unrectifiable (i.e., they are concentrated on a set that meets every Lipschitz curve in a set of zero length). In other words, such measures always exhibit a "fractal" behavior.
In [4] I showed that the jump set of a function on Rn is always (n-1)-rectifiable (without assuming bounded variation or any other regularity of the function).
A set is rectifiable if and only if it admits at almost every point x an approximate tangent cone.
A set is C1,α-rectifiable if and only if it admits at almost every point x an approximate tangent paraboloid [3].
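Roughly speaking (glossing over the precise density assumptions, for which see [3]), having an approximate tangent paraboloid at a point x means that, at small scales, most of the set near x lies in a region of the form

$$ \{ y : \operatorname{dist}(y - x, V) \le C\,|y - x|^{1+\alpha} \} $$

for some k-plane V: a neighborhood of V that pinches at x at a faster-than-conical rate.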
Currents represent a notion of rough surface, and are widely used as a setting in which to prove the existence of minimal surfaces, for instance in Plateau's problem. Indeed, the direct method of the calculus of variations applies (thanks to the lower semicontinuity of mass and the Federer-Fleming compactness theorem). One should keep in mind the following defining principle:
Distributions are to smooth test functions as k-currents are to smooth differential k-forms.
One should also keep in mind that many currents one usually encounters have finite mass. In particular:
Currents with finite mass are just vector-valued measures, with values in the space of k-multi-vectors.
Thinking about currents simply as measures can make them easier to work with.
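Concretely, by the Riesz representation theorem, a k-current T with finite mass acts on a smooth k-form ω as

$$ T(\omega) = \int \langle \omega(x), \vec{T}(x) \rangle \, \mathrm{d}\|T\|(x), $$

where ‖T‖ is a finite positive measure (the mass measure) and $\vec{T}(x)$ is a unit k-vector; the mass of T is then ‖T‖(Rn).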
Transport of currents
In a project with Paolo Bonicatto and Filip Rindler we have studied the Geometric Transport Equation (GTE) for currents:
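$$ \partial_t T_t + \mathcal{L}_{b_t} T_t = 0, \tag{GTE} $$

where $\mathcal{L}_{b_t}$ denotes the Lie derivative of the current $T_t$ along the vector field $b_t$ (see [9] for the precise definitions and conventions).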
This equation involves the Lie derivative of currents and is a generalization of both the continuity equation (0-dimensional case) and the transport equation (top-dimensional case). It aims to describe the motion of rough/singular surfaces driven by a given vector field b, possibly time-dependent. In the case of 1-surfaces, i.e., curves, (GTE) reduces to the ideal induction equation, and it can also be used to model the motion of dislocations in crystalline materials.
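For instance, in the 0-dimensional case, where $T_t = \mu_t$ is a (signed) measure, the Lie derivative reduces (up to sign conventions) to $\mathcal{L}_{b}\mu = \operatorname{div}(b\mu)$, so that (GTE) becomes the usual continuity equation $\partial_t \mu_t + \operatorname{div}(b_t \mu_t) = 0$.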
In [9] we developed the basic framework for the equation and studied some fundamental properties of solutions. One question was the following: given a time-parametrized path of currents that moves in an absolutely continuous way (with respect to the flat norm), can we find a vector field that transports it, namely one solving the GTE? The answer is related to a Sard-type property possessed by the path. One example where the answer is negative is pictured below: the level sets of the flat mountain.
In [11] we proved existence and uniqueness for the initial value problem for the GTE driven by an autonomous (i.e., time-independent) Lipschitz vector field. The key tool is to look at the PDE in space-time, and then use some results related to the decomposability bundle of the space-time current.
Later, in [16], we established the same well-posedness result when the vector field is time-dependent: Lipschitz in space and merely integrable in time. The strategy is still based on the decomposability bundle, but in this case the flow associated with the vector field is only absolutely continuous (AC) in time, rather than Lipschitz. For this reason we first needed to extend some differentiability results related to the decomposability bundle to the class of functions that are Lipschitz in space but only AC in time.
Currents in metric spaces
The theory of currents has been extended to metric spaces by Ambrosio and Kirchheim. Together with Paolo Bonicatto and Enrico Pasqualetto, in [7] we have extended to metric spaces the following two classical facts:
Every integral 1-current is the sum of at most countably many (currents associated with) simple Lipschitz curves;
Every integral k-current admits a decomposition into indecomposable components (in the sense recalled below).
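Here a decomposition means, as in the classical Euclidean setting, writing

$$ T = \sum_i T_i \qquad \text{with} \qquad \mathbf{M}(T) = \sum_i \mathbf{M}(T_i) \ \text{ and } \ \mathbf{M}(\partial T) = \sum_i \mathbf{M}(\partial T_i), $$

and a current is indecomposable if it admits no such splitting into two non-zero pieces.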
With Raquel Perales in [10] we have generalized to metric currents the following rigidity property: a 1-Lipschitz map from Rn to Rn that preserves volumes is necessarily an isometry.
The picture shows the fifth iteration in the construction of the flat mountain: it is the graph of a BV function from [0,1]2 to [0,1] whose derivative is zero (Lebesgue-)almost everywhere. Its level sets can be viewed as a time-parametrized family of integral 1-currents that evolve in an absolutely continuous way with respect to the flat distance. Nonetheless, these currents are not transported by any vector field. Picture and result from [9].
In 2014 Bourgain, Brezis and Mironescu [BBM] introduced a family of BMO-type functionals (indexed by a positive parameter ε) for functions f on Rn. For a given ε>0, these functionals measure the maximum possible sum of the mean oscillations of f over a family of disjoint ε-cubes. Ambrosio, Bourgain, Brezis and Figalli [ABBF] showed that, when f is the characteristic function of a set E, these functionals converge as ε → 0 to 1/2 times the perimeter of E. This gives an alternative characterization of the perimeter that does not rely on the distributional derivative of f.
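Schematically (glossing over the exact normalization and the constraint on the number of cubes imposed in [BBM] and [ABBF]), the functionals take the form

$$ I_\varepsilon(f) = \varepsilon^{n-1} \, \sup_{\mathcal{F}_\varepsilon} \, \sum_{Q \in \mathcal{F}_\varepsilon} \frac{1}{|Q|} \int_Q |f - f_Q| \, dx, \qquad f_Q := \frac{1}{|Q|} \int_Q f \, dx, $$

where the supremum runs over families $\mathcal{F}_\varepsilon$ of pairwise disjoint cubes of side ε.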
Subsequent works extended this result to SBV functions, showing that as ε → 0 the functionals converge to 1/2 times the jump part of |Df| plus 1/4 times the absolutely continuous part of |Df|. However, for a BV function with a non-trivial Cantor part the functionals may oscillate as ε → 0, preventing the existence of a limit.
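In formulas, writing $D^a f$ and $D^j f$ for the absolutely continuous and jump parts of Df, the extended result reads

$$ \lim_{\varepsilon \to 0} I_\varepsilon(f) = \tfrac{1}{2}\,|D^j f|(\mathbb{R}^n) + \tfrac{1}{4}\,|D^a f|(\mathbb{R}^n) \qquad \text{for every } f \in SBV, $$

consistent with the case of a characteristic function, where |Df| coincides with its jump part and equals the perimeter.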
My work in this area, in collaboration with Adolfo Arroyo-Rabasa and Paolo Bonicatto, concerns two different approaches to the latter problem, both capable of dealing with BV functions having a non-trivial Cantor part.
In a first paper [8] we answered a question left open in [ABBF], proving that if one replaces the notion of convergence with Γ-convergence, then the limit is 1/4 times the total variation of f, for all BV functions f.
In a second paper [13] we modified the family of functionals by allowing families of cubes of size at most ε (as opposed to exactly ε). This natural relaxation yields a limit for every BV function f, which we characterize in terms of the blow-ups (also called tangents) of f.
A set E and a family of cubes that (supposedly) maximizes the BMO-type oscillation functional for the characteristic function of E. The cubes like to lie half inside and half outside the set, aligned with the boundary: in this way the mean oscillation of the characteristic function inside each cube is roughly 1/2. This is also (crucially) the best constant in Poincaré's inequality on the cube.
The guiding question in the area is:
Can we mathematically prove the emergence of large-scale structures starting from small-scale interactions?
A very simple model to see this in action is the sticky-disk model: a finite number of particles x1, ..., xN in the plane interact by means of an energy induced by a pairwise potential V : [0,∞) → [0,∞] that strongly favors distance 1 between particles:
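$$ E(x_1, \dots, x_N) = \sum_{i<j} V(|x_i - x_j|), \qquad V(r) = \begin{cases} +\infty & \text{if } r < 1, \\ -1 & \text{if } r = 1, \\ 0 & \text{if } r > 1. \end{cases} $$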
It is called the sticky-disk energy because one can imagine N non-overlapping disks of radius ½ that like to be tangent to each other (each tangency contributes -1 to the total energy).
We seek minimizers of this energy for a given fixed number of particles N.
In 1980 Heitmann and Radin showed that minimizers of this energy crystallize, namely they are subsets of a triangular lattice with step 1. One year later, Radin showed that the same holds for the less singular soft-disk potential. Moreover, the global shape of minimizers is roughly hexagonal as N → ∞. Similar questions for less singular (and more physical) potentials V remain open.
In [1] I showed that the sticky-disk model can be seen as a suitable "dewetting" limit of multi-bubble isoperimetric problems, which I was studying for my PhD thesis.
Together with Lucia De Luca, in [12] we gave a new proof of the soft-disk result by Radin mentioned above (obtaining the sharp range for the allowed potentials), using an energy decomposition for planar graphs proved earlier by Lucia De Luca and Gero Friesecke.
More recently, in [15] we considered the anisotropic sticky-disk model, where distances between particles are measured in the supremum norm on the plane. We showed that in this case minimizers crystallize on the square lattice, and their global shape is asymptotically close to an octagon (the result is stated in terms of Gamma-convergence of the energy functionals). We proved this via an energy decomposition, reminiscent of the one in De Luca-Friesecke, that also works for a certain class of non-planar graphs.
Together with Mircea Petrache, in [5] we considered the case where the particles are constrained to a predetermined (quasi-periodic) lattice, but the potential V can be long-ranged, namely particles can interact with next-to-nearest neighbors and beyond. We showed through Gamma-convergence that the global shape of minimizers is determined by the potential as a Minkowski sum of the segments associated with the "activated bonds". One consequence is that, on the Penrose tiling, subsets of N tiles that minimize the perimeter of their union are close to a regular decagon when N is large (see picture below).
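One way to picture this (a heuristic reformulation, not the precise statement of [5]): after rescaling, the limit shape is a Minkowski sum of segments,

$$ W = S_1 + S_2 + \dots + S_m, \qquad S_i = [-v_i, v_i], $$

with one segment for each activated bond direction $v_i$. Such sets are exactly the centrally symmetric convex polygons (zonogons); a regular decagon, for instance, is the Minkowski sum of five segments whose directions differ by multiples of π/5.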
A portion of the Penrose tiling, a quasiperiodic tiling of the plane obtained with two types of rhombuses.
One can consider an isoperimetric problem on this tiling: among all configurations of N tiles, find those that minimize the perimeter of their union. As a consequence of the results in [5], as N → ∞ minimizers converge (after rescaling) to a regular decagon.