One World Mathematics of INformation, Data, and Signals (1W-MINDS) Seminar

The 1W-MINDS Seminar was founded in the early days of the COVID-19 pandemic, when travel was largely impossible.  We have chosen to continue the seminar since then to help form the basis of an inclusive community interested in mathematical data science, computational harmonic analysis, and related applications by providing free access to high-quality talks without the need to travel.  In the spirit of environmental and social sustainability, we welcome you to participate in both the seminar and our Slack community!  Zoom talks are held on Thursdays either at 2:30 pm New York time or at 10:00 am Paris time / 4:00 pm summer Shanghai time / 5:00 pm winter Shanghai time.  To find and join the 1W-MINDS Slack channel, please click here.

Current Organizers (September 2023 - May 2024):  Axel Flinth (Principal Organizer for Europe/Asia, Umeå University), Longxiu Huang (Principal Organizer for The Americas, Michigan State University), Alex Cloninger (UC San Diego), Mark Iwen (Michigan State University), Rima Alaifari (ETH Zürich), Santhosh Karnik (Michigan State University), Weilin Li (City College of New York), Yong Sheng Soh (National University of Singapore), and Yuying Xie (Michigan State University).

To sign up to receive email announcements about upcoming talks, click here.
To join the 1W-MINDS Slack channel, click here.


The organizers would like to acknowledge support from the Michigan State University Department of Mathematics.  Thank you.

Zoom Link for all 2:30 pm New York time Talks: New York link 

Passcode: the smallest prime > 100 

Zoom Link for all 10:00 am Paris / 4:00 pm Summer Shanghai / 5:00 pm Winter Shanghai time Talks: Paris/Shanghai link

Passcode: The integer part and first five decimals of e (Euler's number)

FUTURE TALKS

Differentially private M-estimation via noisy optimization

We present a noisy composite gradient descent algorithm for differentially private statistical estimation in high dimensions. We begin by providing general rates of convergence for the parameter error of successive iterates under assumptions of local restricted strong convexity and local restricted smoothness. Our analysis is local, in that it ensures a linear rate of convergence when the initial iterate lies within a constant-radius region of the true parameter. At each iterate, multivariate Gaussian noise is added to the gradient in order to guarantee that the output satisfies Gaussian differential privacy. We then derive consequences of our theory for linear regression and mean estimation. Motivated by M-estimators used in robust statistics, we study loss functions that downweight the contribution of individual data points in such a way that the sensitivity of function gradients is guaranteed to be bounded, even without the usual assumption that our data lie in a bounded domain. We prove that the objective functions thus obtained indeed satisfy the restricted convexity and restricted smoothness conditions required for our general theory. We then show how the private estimators obtained by noisy composite gradient descent may be used to obtain differentially private confidence intervals for regression coefficients, by leveraging debiasing techniques for the Lasso developed in high-dimensional statistics. We complement our theoretical results with simulations that illustrate the favorable finite-sample performance of our methods.
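
To make the high-level description above concrete, the following is a minimal, hypothetical sketch of this style of procedure for linear regression: a Huber-type loss bounds each data point's gradient contribution, and isotropic Gaussian noise is added to every gradient before the update. The function names, noise scale, step size, and truncation level are illustrative assumptions, the composite (proximal) step for a regularizer is omitted, and this is not the speaker's implementation.

```python
import numpy as np

def huber_grad(residual, tau):
    """Gradient of a Huber-type loss w.r.t. the residual; bounded by tau."""
    return np.clip(residual, -tau, tau)

def noisy_gradient_descent(X, y, steps=200, eta=0.1, tau=1.0, noise_scale=0.05, seed=0):
    """Run gradient descent with Gaussian noise added to each gradient.

    Because the per-sample gradient contribution is bounded (|huber_grad| <= tau),
    the sensitivity of the averaged gradient is controlled, so adding Gaussian
    noise calibrated to that sensitivity can yield Gaussian differential privacy
    under suitable assumptions; the noise_scale here is purely illustrative.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(steps):
        residuals = X @ beta - y
        grad = X.T @ huber_grad(residuals, tau) / n      # bounded-sensitivity gradient
        grad += noise_scale * rng.standard_normal(d)     # Gaussian noise for privacy
        beta -= eta * grad
    return beta

# Toy usage: recover a coefficient vector from noisy linear measurements.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))
beta_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(500)
print(noisy_gradient_descent(X, y))
```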