I am doing math, more precisely probability theory.
Let me tell you about some of my research projects in three different ways:
ELI5
="explain like I'm 5", or in this case rather "tell a story like I'm not a mathematician", which should motivate you to dive into the math.
Abstract
for purist mathematicians.
Visualization
Sometimes, a picture or simulation is worth a thousand words.
Each of the following projects merges several papers, and the list is not exhaustive; see papers for a full list of publications. The illustrations certainly don't replace rigorous math; rather, they should motivate you to read the corresponding paper!
Also note that (by construction) this page is and will always be under construction.
ELI5: This is the story of a zero of a (random) polynomial, called Z.
Once upon a time (t=0), Z lived happily in the complex plane. It was born as one of the roots of its mother polynomial (who has random coefficients). As the polynomial's degree grew, our Z gained more and more siblings. It is well known that the zeros live according to the circular law. But then, as time t went on, a mysterious force called the heat flow operator started acting on the polynomial. Under its influence, our zero Z and its companions began their migration in the direction of the real world. First, they found themselves in a new habitat, the elliptic law. Then, at time t=1, the zeros collapsed onto the real line, following the semicircle distribution, and there they kept living a real life.
Abstract: We investigate the evolution of the empirical distribution of the complex roots of high-degree random polynomials, when the polynomial undergoes the heat flow. In one prominent example of Weyl polynomials, the limiting zero distribution evolves from the circular law into the elliptic law until it collapses to the semicircle law, as was recently conjectured for characteristic polynomials of random matrices by Hall and Ho, 2022. Moreover, for a general family of random polynomials with independent coefficients and isotropic limiting distribution of zeros, we determine the zero distribution of the heat-evolved polynomials in terms of its logarithmic potential. Furthermore, we explicitly identify two critical time thresholds, at which singularities develop and at which the limiting distribution collapses to the semicircle law. We completely characterize the limiting root distribution of the heat-evolved polynomials before singularities develop as the push-forward of the initial distribution under a transport map. Finally, we discuss the results from the perspectives of partial differential equations (in particular Hamilton–Jacobi equation and Burgers equation), optimal transport, and free probability.
Zeros of heat-evolved Weyl polynomials. Before the heat flow, the large n limit is the circular law. After the heat flow, it is the elliptic law (uniform on an ellipse). For times t>1, the limit is the semicircle law of variance t. We identify a "transport map" that describes (macroscopically) the trajectories in the picture.
Zeros of heat-evolved Kac polynomials. The large n limit is not easily described via a density, but rather via its logarithmic potential.
Zeros of heat-evolved random exponential polynomials.
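Pictures like these can be reproduced numerically. Below is a minimal numpy sketch: it builds a Weyl polynomial, applies the heat operator exp((τ/2) d²/dz²) as a (finite) series of second derivatives, and computes the roots. The time normalization τ = t·n and the 1/√n scaling of the roots are assumptions chosen here so that the t=0 cloud approximates the circular law; see the paper for the precise normalization.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def weyl_poly(n, rng):
    """Coefficients (ascending) of sum_k xi_k z^k / sqrt(k!), xi_k iid complex Gaussian."""
    xi = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
    log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
    return xi * np.exp(-0.5 * log_fact)  # log-factorials avoid overflow

def heat_flow(coeffs, tau):
    """Apply exp((tau/2) d^2/dz^2); the series terminates for polynomials."""
    out = np.array(coeffs, dtype=complex)
    term = out.copy()
    j = 0
    while term.size > 2:  # p^(2j) vanishes once the degree drops below 2
        j += 1
        term = (tau / 2) / j * P.polyder(term, 2)
        out[: term.size] += term
    return out

rng = np.random.default_rng(0)
n = 100
p0 = weyl_poly(n, rng)
roots0 = P.polyroots(p0) / np.sqrt(n)                        # t=0: ~ circular law
roots_t = P.polyroots(heat_flow(p0, 0.5 * n)) / np.sqrt(n)   # t=0.5: ~ elliptic law
```

Scatter-plotting `roots0` and `roots_t` (e.g. with matplotlib) shows the qualitative migration of the zeros towards the real axis.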
Evolution of complex roots of random polynomials under repeated differentiation
[2023- arXiv.]
ELI5 (continued from the heat flow): Out there, in the vast universe, there are other families of polynomials and flows, each with its own unique tale to tell. In the story of the differentiation flow of the mother polynomial, our zero Z moves along a different path! It leads towards the center, until no degree of the polynomial is left to differentiate and, ultimately, our zero dies.
But there is hope! Maybe Z still lives; we only need to find out where...
Abstract: For repeated differentiation, we start with a random polynomial P^N of degree N with independent coefficients and consider a new polynomial P_t^N obtained by repeated applications of a fractional differential operator of the form z^a (d/dz)^b, where a and b are real numbers. When b>0, we compute the limiting root distribution \mu_t of P_t^N as N\rightarrow\infty. We show that \mu_t is the push-forward of the limiting root distribution of P^N under a transport map T_t. The map T_t is defined by flowing along the characteristic curves of the PDE satisfied by the log potential of \mu_t. In the special case of repeated differentiation, our results may be interpreted as saying that the roots evolve radially with constant speed until they hit the origin, at which point they cease to exist. For general a and b, the transport map T_t has a free probability interpretation as multiplication of an R-diagonal operator by an R-diagonal "transport operator." As an application, we obtain a push-forward characterization of the free self-convolution semigroup \oplus of radial measures on \mathbb{C}.
Zeros of repeatedly differentiated Weyl polynomials. The large n limit is not the circular law anymore, as there is more mass near the origin.
Zeros of Weyl polynomials after repeatedly applying z^{-1}d/dz. The large n limit is again the circular law.
Zeros of repeatedly integrated Weyl polynomials.
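The radial migration under repeated differentiation is also easy to observe numerically. The sketch below (an illustration, not the paper's method; the degree, seed, and 1/√n scaling are choices made here) differentiates a Weyl polynomial n/2 times and compares the root radii before and after.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(1)
n = 200

# Weyl polynomial sum_k xi_k z^k / sqrt(k!) (log-factorials avoid overflow)
xi = (rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)) / np.sqrt(2)
log_fact = np.concatenate(([0.0], np.cumsum(np.log(np.arange(1, n + 1)))))
p = xi * np.exp(-0.5 * log_fact)

k = n // 2                                       # differentiate away half of the degree
pk = P.polyder(p, k)

r_before = np.abs(P.polyroots(p)) / np.sqrt(n)   # ~ circular law radii
r_after = np.abs(P.polyroots(pk)) / np.sqrt(n)   # mass has moved toward the origin
```

The surviving n−k roots have visibly smaller radii, matching the picture of zeros moving towards the center until they die.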
ELI5: You are the ruler of beer-land, which is shaped like a circle (see figure), with 100 breweries randomly scattered around it (the black dots). Hops grow all over the country, and you have to divide this huge round field into 100 farms (colored regions) such that every brewery receives an equal 1% share of the hops. That sounds fair!
The farmers transport the hops from the crops directly to the breweries, but they are paid 'per distance travelled'. Hence, each farm should lie somewhere close to its associated brewery.
How do you find the optimal division to minimize the transport cost? What is the lowest cost you have to pay? How does it depend on the locations of the breweries?
Abstract: We investigate the Wasserstein distance between the empirical spectral distribution of non-Hermitian random matrices and the circular law. For Ginibre matrices, we obtain an optimal rate of convergence n^{−1/2} in 1-Wasserstein distance. This shows that the expected transport cost of complex eigenvalues to the uniform measure on the unit disk decays faster (due to the repulsive behaviour) compared to that of i.i.d. points, which is known to include a logarithmic factor. For non-Gaussian entry distributions with finite moments, we also show that the rate of convergence nearly attains this optimal rate.
100 iid uniformly distributed points, and the equal-sized cells that form the support of the optimal transport map from the Lebesgue measure to the points.
100 eigenvalues of a Ginibre matrix and the corresponding optimal transport map
100 eigenvalues of a discrete random matrix and the corresponding optimal transport map
Note how the cells are stretched in the iid case, and much more regular in the case of eigenvalues. The visual difference can be made precise for Ginibre matrices in terms of the Wasserstein distance: it is smaller in the case of eigenvalues.
Same as above, for 1000 iid points.
Same as above, for 1000 eigenvalues.
And here is an animated GIF of the gradient descent procedure by which all of the above pictures were simulated. (It should be moving; if not, reload the page.)
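For readers who want to reproduce such pictures, here is a minimal sketch of the semi-discrete optimal transport computation behind them: gradient ascent on the Kantorovich dual, where each weight vector w defines Laguerre cells, and the gradient compares each cell's mass with the target share 1/n. The disk discretization, step size, and iteration count are ad-hoc choices, not the exact procedure used for the figures.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# n random points in the unit disk (rejection sampling)
pts = []
while len(pts) < n:
    x, y = rng.uniform(-1, 1, 2)
    if x * x + y * y <= 1:
        pts.append((x, y))
pts = np.array(pts)

# grid approximation of the uniform measure on the disk
m = 200
xs = np.linspace(-1, 1, m)
X, Y = np.meshgrid(xs, xs)
mask = X**2 + Y**2 <= 1
grid = np.column_stack([X[mask], Y[mask]])
gw = 1.0 / mask.sum()                      # equal weight per grid cell

# squared distances from every grid cell to every point
d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)

mass0 = np.bincount(np.argmin(d2, axis=1), minlength=n) * gw  # Voronoi masses (w = 0)

w = np.zeros(n)                            # dual weights defining the Laguerre cells
for step in range(200):
    cell = np.argmin(d2 - w[None, :], axis=1)
    mass = np.bincount(cell, minlength=n) * gw
    w += 0.5 * (1.0 / n - mass)            # gradient ascent on the dual functional
```

Coloring the grid by `cell` gives pictures like the ones above; the cell masses approach the equal share 1/n as the ascent proceeds.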
ELI5: Imagine an election in Randomnistan, where each citizen lives in one of four states (the 4 blocks) and votes completely at random for one of three parties (blue, yellow, green).
It is a strange country in which peer pressure is more important than individual opinions, which are completely random. People tend to like what the mainstream in their own state likes, and they are also influenced by the votes in neighboring states. If most votes are blue in my state and yellow in the neighboring one, then I am also likely to vote blue, maybe yellow, but probably not green!
If the influence is low and the randomness is high, then all parties will receive roughly the same number of votes. But the actual outcome of the election is random and also depends very much on the influences between the states!
[If every state disliked the opinion of its neighbors, then one of the states would probably get very frustrated upon noticing that no party is left to like...]
Abstract: We study the block spin mean-field Potts model, in which the spins are divided into s blocks and can take q≥2 different values (colors). Each block is allowed to contain a different proportion of vertices and itself behaves like a mean-field Ising/Potts model, which also interacts with the other blocks according to different temperatures. Of particular interest is the behavior of the magnetization, which counts the number of colors appearing in the distinct blocks. We prove central limit theorems for the magnetization in the generalized high temperature regime and provide a moderate deviation principle for its fluctuations on lower scalings. More precisely, the magnetization concentrates around the uniform vector of all colors with an explicit, but singular, Gaussian distribution.
Here we see a frustration phenomenon, occurring for negative interaction between the blocks and more blocks than colors. (Block two does not want to vote for blue, yellow, or green.) Finding the CLT in this case is still an open problem.
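A toy Monte Carlo version of the model illustrates the high temperature concentration from the abstract. The sketch below runs Glauber (heat-bath) dynamics for a block spin Potts model with equal block sizes and a weak positive coupling; all parameters (s=4 blocks, q=3 colors, coupling 0.2, sweep count) are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
q, s = 3, 4                  # colors (parties), blocks (states)
n_per = 150                  # vertices per block (equal blocks, for simplicity)
N = s * n_per
block = np.repeat(np.arange(s), n_per)
J = 0.2 * np.ones((s, s))    # weak coupling: the "high temperature" regime

sigma = rng.integers(0, q, size=N)           # completely random initial votes

for sweep in range(10):                      # Glauber (heat-bath) sweeps
    counts = np.array([np.bincount(sigma[block == b], minlength=q) for b in range(s)])
    for i in rng.permutation(N):
        counts[block[i], sigma[i]] -= 1      # remove spin i from the counts
        field = J[block[i]] @ counts / N     # mean-field "peer pressure" per color
        pr = np.exp(field - field.max())
        sigma[i] = rng.choice(q, p=pr / pr.sum())
        counts[block[i], sigma[i]] += 1

# per-block color frequencies: at high temperature they concentrate near 1/q
freq = np.array([np.bincount(sigma[block == b], minlength=q) / n_per for b in range(s)])
```

With a negative coupling matrix J and more blocks than colors, the same dynamics exhibits the frustration phenomenon described above.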
ELI5: Take a sheet of paper and draw some points in a circle. Yes, just like this. More and more points... very uniformly distributed... A few more points here and there... and don't worry if a few fall outside the circle. Okay, let's say 50 points are enough. Now compare it to the image on the right, where Jean Ginibre drew some points.
How do we find out whose points are better distributed? If there are big empty areas, or clusters, that'd be bad. So we look for (round) areas with too many or too few points in them, like in the picture. Finding these is hard, but they always give you a score (= 'Kolmogorov distance') that tells whether your points are more evenly distributed, or Jean's, and he is very good at uniformly distributing points!
Abstract: In this project, we investigate the Kolmogorov distance of the Circular Law (and its powers) to the empirical spectral distribution of non-Hermitian random matrices with independent entries (and products of such). The optimal rate of convergence is determined by the Ginibre ensemble and is given by n^{−1/2}. We show a smoothing inequality for complex measures that quantitatively relates the uniform Kolmogorov-like distance to the concentration of logarithmic potentials. Combining it with results from Local Circular Laws, we apply it to prove a nearly optimal rate of convergence to the Circular Law in Kolmogorov distance. Furthermore, we show that the same rate of convergence holds for the empirical measure of the roots of Weyl random polynomials and, up to a logarithmic factor, for products of matrices with independent entries in the bulk.
Eigenvalues of a 50x50 Ginibre matrix. The green ball contains too many points, given its size. The Kolmogorov distance is attained here.
Eigenvalues of a 100x100 matrix. The green ball contains too few points, given its size.
Eigenvalues of a 500x500 matrix. Again, the green ball is the "worst ball", where the uniform measure has the largest distance to the empirical distribution, attaining the Kolmogorov distance.
The plot shows the absolute difference between the mass of a ball under the empirical distribution and under the uniform distribution, as a function of the radius. The blue curve uses a centered ball; for the orange curve, the center is shifted by 0.2. This is a single realization of a matrix of size n=5000.
The same plot as on the left, but for sample-mean ESDs, averaged over 20 independent matrices of size n=5000. Evidently, the rate of convergence is faster in the interior, and the "worst ball" tends to be the unit ball.
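The centered-ball curve from these plots can be recomputed in a few lines. The sketch below samples a Ginibre matrix and evaluates the discrepancy |empirical mass − circular-law mass| over centered balls only (a simplification of the full Kolmogorov distance, which takes a supremum over all balls); the size n=200 and the radius grid are arbitrary choices made here.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Ginibre matrix, normalized so that the spectrum fills the unit disk
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
radii = np.abs(np.linalg.eigvals(G))

rs = np.linspace(0.0, 1.5, 301)
emp = np.array([np.mean(radii <= r) for r in rs])  # empirical mass of B(0, r)
uni = np.minimum(rs, 1.0) ** 2                     # circular-law mass of B(0, r)
dist = np.abs(emp - uni).max()                     # Kolmogorov-like distance (centered balls)
```

Plotting `np.abs(emp - uni)` against `rs` gives a curve like the blue one above; `dist` should be of order n^{−1/2}.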