Linear operators on infinite-dimensional Hilbert spaces are, in many respects, hard objects to study: as evidence, we still do not know whether every bounded operator on a separable infinite-dimensional Hilbert space has a non-trivial closed subspace that it leaves invariant.
Operators in tracial von Neumann algebras are of particular interest because they behave `like matrices' in many respects. To every matrix one can assign a probability measure supported on its spectrum, with weights proportional to the dimensions of its generalized eigenspaces. An operator in a tracial von Neumann algebra has an analogous probability measure supported on its spectrum (the Brown measure) and invariant subspaces (the Haagerup-Schultz subspaces) associated with every Borel subset of its spectrum.
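To fix ideas, here is the matrix-case measure alluded to above (a standard fact, recorded only for orientation): if an n x n matrix A has distinct eigenvalues \lambda_1, \dots, \lambda_k with generalized eigenspaces of dimensions m_1, \dots, m_k, the associated probability measure is
\[
\mu_A \;=\; \frac{1}{n}\sum_{j=1}^{k} m_j\, \delta_{\lambda_j},
\]
and the Brown measure of A, computed with respect to the normalized trace on the n x n matrices, is exactly \mu_A.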
Tracial von Neumann algebras can be thought of as non-commutative probability spaces, where the trace plays the role of an expectation. Free probability theory, initiated by Voiculescu, seeks to use probabilistic tools and language to study the free group von Neumann algebras. Many theorems in classical probability theory have free-probabilistic analogues. Notably, there is a `free central limit theorem', and free-probabilistic versions of the real and complex Gaussians, known as the semicircular and circular operators. Free probability theory has deep connections to random matrix theory, as the `distributional limit' of many classes of large random matrices is an operator living in a tracial von Neumann algebra. For example, the Brown measure of the circular operator is the uniform measure on the unit disc, which is also the limiting empirical spectral measure of n x n random matrices whose entries are independent and identically distributed random variables with mean 0 and variance 1/n.
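This convergence is easy to observe numerically. The following is a minimal simulation sketch (illustrative only, not taken from the papers cited here; the matrix size n = 2000 is an arbitrary choice): it samples a matrix with i.i.d. Gaussian entries of variance 1/n and checks that its eigenvalues essentially fill out the unit disc uniformly.

    import numpy as np

    n = 2000
    rng = np.random.default_rng(0)
    # i.i.d. real Gaussian entries with mean 0 and variance 1/n
    X = rng.normal(loc=0.0, scale=1.0 / np.sqrt(n), size=(n, n))
    eigs = np.linalg.eigvals(X)

    # For the uniform measure on the unit disc, (almost) all eigenvalues
    # should have modulus below 1, and the fraction with modulus below r
    # should be roughly r^2.
    print(np.mean(np.abs(eigs) < 1.0))   # close to 1
    print(np.mean(np.abs(eigs) < 0.5))   # close to 0.25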
In my thesis, I studied the angles between the spectral subspaces of operators in a tracial von Neumann algebra. In joint work with my advisor Ken Dykema [arXiv:1907.10685], we showed that the angles between the Haagerup-Schultz subspaces of complementary sets are uniformly bounded away from zero if and only if the operator is similar to N+Q, where N is a normal operator, Q has Brown measure concentrated at 0, and N and Q commute. In the matrix setting such a decomposition follows directly from the Jordan canonical form, so we call this a `Jordan-like' decomposition. More consequentially, we devised methods using tools from free probability theory to compute these angles for a large class of operators arising as limits of large random matrices, such as the circular operator. In a subsequent pair of papers [arXiv:2012.00903, arXiv:2305.07987], we showed that many DT-operators (including the circular operator) fail to have this non-zero angle property and therefore cannot have a `Jordan-like' decomposition.
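The matrix statement behind the name is the following standard consequence of the Jordan canonical form: every n x n matrix A satisfies, for some invertible matrix S,
\[
S^{-1} A S \;=\; D + Q, \qquad DQ = QD,
\]
where D is diagonal (hence normal) and Q is nilpotent, so that the spectral measure of Q is concentrated at 0. The results above characterize when an analogous decomposition is available for operators in a tracial von Neumann algebra.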
The second major component of my research has involved non-commutative harmonic analysis. Given a function on a locally compact group G, one may define a corresponding `Fourier multiplier' operator on the group's von Neumann algebra (in the commutative/Euclidean case, these are in fact the Fourier multipliers one encounters in a harmonic analysis course). The behavior of these multipliers has been used to provide insight into the structure of the group von Neumann algebra.
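For a discrete group G, for instance, the construction can be stated very concretely (a schematic definition, suppressing all analytic issues): a function \varphi : G \to \mathbb{C} determines the map
\[
T_\varphi\Big(\sum_{g \in G} c_g\, \lambda_g\Big) \;=\; \sum_{g \in G} \varphi(g)\, c_g\, \lambda_g,
\]
where \lambda denotes the left regular representation, and one asks when T_\varphi extends to a (completely) bounded map on the group von Neumann algebra.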
One may also view these multipliers as (possibly unbounded) operators on the non-commutative L^p spaces associated with the group von Neumann algebra. Just as in the classical case, there exist numerous results (such as a non-commutative Hörmander-Mihlin theorem) about the L^p operator norms of these multipliers. A natural question is how these multipliers behave with respect to subgroups. This is the content of the classical de Leeuw theorem, which says that restricting the symbol to a subgroup produces a multiplier whose L^p operator norm is bounded above by that of the original Fourier multiplier.
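Schematically, and stated loosely (ignoring the precise hypotheses on the groups and symbols needed in either the classical or the non-commutative setting), such restriction theorems take the form
\[
\big\| T_{\varphi|_H} : L^p(\mathrm{VN}(H)) \to L^p(\mathrm{VN}(H)) \big\| \;\le\; \big\| T_{\varphi} : L^p(\mathrm{VN}(G)) \to L^p(\mathrm{VN}(G)) \big\|
\]
for a subgroup H of G; in de Leeuw's original setting G = \mathbb{R}^n and H is, for example, a subspace or the lattice \mathbb{Z}^n.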
In the Euclidean setting, multivariable functions are used to generate multilinear Fourier multipliers, whose behavior is sometimes markedly different from that of their linear counterparts. In [arXiv:2201.10400], we initiated the study of non-commutative multilinear multipliers and obtained an exact analogue of the linear de Leeuw restriction theorem. This allowed us to construct non-trivial examples of bounded bilinear multipliers on the Heisenberg group. We also proved a `local' version of the de Leeuw theorem, where the upper bound is multiplied by a constant depending on the support of the symbol. In addition, we showed how to explicitly compute this constant for real reductive Lie groups.
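In the bilinear case, and again for a discrete group G for concreteness, the multiplier attached to a symbol \varphi : G \times G \to \mathbb{C} acts schematically by
\[
T_\varphi\Big(\sum_{g} a_g\, \lambda_g \,,\, \sum_{h} b_h\, \lambda_h\Big) \;=\; \sum_{g,h} \varphi(g,h)\, a_g\, b_h\, \lambda_{gh},
\]
and one studies its boundedness from L^{p_1} \times L^{p_2} to L^p with the Hölder relation 1/p = 1/p_1 + 1/p_2.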
Schur multiplication is a `naive' version of matrix multiplication: given a matrix A, the associated Schur multiplier acts on matrices by entry-wise multiplication with A. Given a function f (on the reals, say), one may construct a Schur multiplier out of the matrix whose (x,y) entry is f(x-y). There exist multiple `transference' results which relate the completely bounded norm of the Schur multiplier of a function to the norm of the corresponding Fourier multiplier. Multilinear Schur multipliers also exist, and have been used to prove several surprising results, such as the resolution of Koplienko's conjecture on spectral shifts.
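Concretely, for matrices indexed by a set X, the Schur multiplier with symbol A acts by
\[
\big(M_A(B)\big)_{x,y} \;=\; A_{x,y}\, B_{x,y},
\]
i.e., by entry-wise (Hadamard) multiplication with A; the Toeplitz-type choice A_{x,y} = f(x-y) is the one that connects Schur multipliers to Fourier multipliers via transference.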
In [arXiv:2206.00549], we used techniques from our work on the restriction theorem to prove a transference result relating appropriate operator norms of multilinear Fourier and Schur multipliers. The proof also yields a slight improvement of the linear L^p results. Additionally, our techniques allow us to show that the vector-valued bilinear Hilbert transform on the reals is not completely bounded when the range space has exponent 1.
During my time as a postdoc at the National Institute of Standards and Technology (NIST), I have also ventured into problems on the applied side.
A flow cytometer is an instrument that shines a laser at a stream of cells and measures the resulting fluorescence emitted by them. Flow cytometry is widely used in medical research and diagnostics, for instance in cancer screening. One limitation of flow cytometry is that the same cell cannot be measured more than once, which makes it difficult to determine how much of the variance in the measured values is due to instrumental noise versus inherent population variability. It is, however, possible to re-measure selected subsets of cells using an instrument called a cell sorter. In [arXiv:2409.17017], we devised a simple experiment and associated models of the underlying noise to construct asymptotically consistent estimators of the instrumental variance. This provides the first known method of quantifying the performance of different flow cytometry instruments.
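The identifiability issue can already be seen in a toy additive-noise model (an illustration only, and not necessarily the noise model used in [arXiv:2409.17017]): if a measured value decomposes as m = t + e, where t is the cell's true fluorescence and e is independent instrumental noise, then
\[
\operatorname{Var}(m) \;=\; \operatorname{Var}(t) + \operatorname{Var}(e),
\]
so a single pass through the instrument identifies only the sum of the population and instrumental variances; the sort-and-re-measure experiment supplies the additional information needed to separate the two terms.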
A mass spectrometer is a common tool used to distinguish compounds. Briefly, it works by splitting a compound into constituent ions, separating the ions by applying an electromagnetic field, and measuring the intensities of the ions, which are grouped according to their mass-to-charge (m/z) ratios. A mass spectrum can therefore be thought of as a function taking m/z values to intensities. Due to truncation and binning, most instruments store this function as a sparse vector in some d-dimensional space. Many drugs of forensic interest are isomers, and therefore have very similar mass spectra. A central question of interest to many communities is how to distinguish compounds with extremely similar mass spectra. In an ongoing project, we explore the use of techniques from optimal transport, statistical inference and random matrix theory to construct new metrics on the space of mass spectra. Our work has yielded robust methods capable of separating previously indistinguishable isomers.
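As one illustrative ingredient (not the metric from this project, but the basic optimal-transport computation on which such metrics can build), the 1-Wasserstein distance between two normalized spectra, viewed as discrete measures on the m/z axis, can be computed as follows; the peak lists are made up for illustration.

    import numpy as np
    from scipy.stats import wasserstein_distance

    # Toy spectra as (m/z, relative intensity) peak lists.
    mz_a = np.array([105.0, 182.1, 289.2])
    int_a = np.array([0.20, 0.50, 0.30])
    mz_b = np.array([105.0, 182.2, 290.2])
    int_b = np.array([0.25, 0.45, 0.30])

    # 1-Wasserstein (earth mover's) distance between the two discrete
    # measures: small shifts in peak positions incur proportionally small
    # cost, unlike bin-by-bin comparisons on a fixed m/z grid.
    d = wasserstein_distance(mz_a, mz_b, u_weights=int_a, v_weights=int_b)
    print(d)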
Disease diagnostics often involves the measurement of various markers in a patient's blood serum, followed by classification as positive or negative based on previously determined thresholds. The optimal classification boundary (i.e., the one that yields the minimal error rate) for a population depends on the actual prevalence of the disease in question, and needs to be updated as the disease progresses through a population. Recent work in this direction has involved the construction of novel `level-set based estimators' to determine these boundaries. The validity of these estimators has so far relied on numerical evidence. In ongoing work, I am using results from empirical process theory and concentration of measure to provide proofs of convergence of these estimators.
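The prevalence-dependence is already visible in the textbook form of the optimal rule (stated here for a marker measurement x with class-conditional densities f_+ and f_- and prevalence q; the notation is mine, not that of the cited work): the error rate is minimized by classifying x as positive exactly when
\[
q\, f_+(x) \;\ge\; (1-q)\, f_-(x),
\]
so the optimal boundary is a level set of the likelihood ratio f_+/f_-, at a level (1-q)/q that shifts as the prevalence q changes.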