The Georgia Tech ACO Student Seminar is run by students in the Algorithms, Combinatorics, & Optimization program at Georgia Tech.
The purpose of this seminar is to keep students updated on current research and to give students a venue to present their work. Any topic related to (but not restricted to) algorithms, combinatorics, and optimization is welcome. You can present research results, demo a work in progress, or just share something of general interest. There will also be occasional talks by ACO faculty and visitors. (Post-docs are welcome too!)
In Spring 2026, the seminar will meet on Fridays in Skiles 006 from 1-2pm. For more information please refer to the announcement sent by the organizers (or contact them directly). Subscribe to the mailing list aco-announce (or send an email to sympa@lists.gatech.edu with the subject line "subscribe aco-announce") to receive the announcements regularly.
If you are interested in giving a talk, you can contact any of the organizers: Aiya Kuchukova, Albert Weng, Jade Lintott, Yuexing (April) Niu.
January 23rd: Mirabel Reid (Georgia Tech)
The Dynamics of Cognition
Abstract:
How does the local firing of neurons in the brain translate to perception, memory, and decision making? This talk will survey neural population models that tackle this question through the lens of dynamical systems. I will focus on two core phenomena --- decision making via competition and memory via attractor dynamics --- and highlight both classical results and recent developments in the field. Beyond their neuroscientific motivation, these models exhibit mathematical structure analogous to varied problems in applied mathematics.
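As a toy illustration of the "decision making via competition" phenomenon mentioned above (this is my own sketch, not material from the talk, and all parameters are illustrative), two mutually inhibiting rate units driven by slightly different inputs settle into a winner-take-all state:

```python
def competition(I1, I2, dt=0.01, steps=5000, w=2.0, tau=1.0):
    """Toy winner-take-all model: two rate units with mutual inhibition,
        tau * dr1/dt = -r1 + max(0, I1 - w * r2)   (and symmetrically for r2).
    Strong inhibition (w > 1) makes the symmetric state unstable, so the
    unit with the larger input suppresses the other. Parameters are
    illustrative, not taken from any specific neuroscience model.
    """
    r1 = r2 = 0.0
    for _ in range(steps):  # forward Euler integration
        d1 = (-r1 + max(0.0, I1 - w * r2)) / tau
        d2 = (-r2 + max(0.0, I2 - w * r1)) / tau
        r1 += dt * d1
        r2 += dt * d2
    return r1, r2

print(competition(1.1, 1.0))  # the unit with the stronger input wins
```

With these parameters the shared fixed point is unstable along the difference direction, so even a small input asymmetry (1.1 vs 1.0) is amplified into a categorical decision: one unit converges to its input level while the other is driven to zero.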
January 30th: Zihao He (Georgia Tech)
Temporary Immunity Does Not Restore a Positive Epidemic Threshold for SIRS on Power-Law Networks
Abstract:
We study the SIRS process on sparse random graphs with power-law degree distributions.
A large physics literature reports numerical evidence for a positive epidemic threshold for SIRS with waning immunity on scale-free networks, suggesting a transition between short-lived and exponentially long-lived regimes.
In contrast, for the SIS/contact process on power-law graphs with exponent $\tau>3$, it is rigorously known that the critical value is $\lambda_c=0$ and that survival is exponentially long for every $\lambda>0$.
We show that, in a survival-time sense, the true threshold for SIRS on power-law random graphs with $\tau>3$ is also zero. Joint work with Debankur Mukherjee and Souvik Dhara.
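As a toy illustration of the SIRS dynamics described above (my own sketch, not from the talk; the discrete-time update, the Erdős–Rényi graph, and all rates are illustrative assumptions rather than the power-law setting of the paper):

```python
import random

def simulate_sirs(n=200, p=0.05, lam=0.5, recover=0.3, wane=0.1,
                  steps=100, seed=0):
    """Discrete-time SIRS on an Erdos-Renyi G(n, p) graph (toy model).

    lam     - per-neighbor infection probability per step (S -> I)
    recover - probability an infected node recovers (I -> R, temporary immunity)
    wane    - probability a recovered node loses immunity (R -> S)
    Returns the number of infected nodes after each step.
    """
    rng = random.Random(seed)
    # Build the random graph as an adjacency list.
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    state = ["S"] * n          # each node is S, I, or R
    state[0] = "I"             # seed a single infection
    history = []
    for _ in range(steps):
        new_state = state[:]
        for v in range(n):
            if state[v] == "S":
                # Each infected neighbor independently transmits w.p. lam.
                if any(state[u] == "I" and rng.random() < lam for u in adj[v]):
                    new_state[v] = "I"
            elif state[v] == "I":
                if rng.random() < recover:
                    new_state[v] = "R"
            else:  # "R": temporary immunity wanes back to susceptible
                if rng.random() < wane:
                    new_state[v] = "S"
        state = new_state
        history.append(state.count("I"))
    return history

history = simulate_sirs()
print(history[:10], history[-1])
```

The survival-time question the talk addresses is whether, for small infection rate, such a process dies out quickly or persists for exponentially long; this sketch only shows the mechanics of the three-state dynamics.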
Abstract:
I will discuss the problem of finding confidence sets for arbitrary distributions. This is a fundamental statistical task with connections to many problems, like support estimation, robust estimation, and conformal prediction. Given samples from a distribution D, the goal is to find a small set that contains at least a target 1 - \alpha probability mass of D. We study this in a setup similar to PAC learning, where, for a fixed family of sets F, our goal is to find a set that achieves coverage on D, and has volume comparable to the smallest set in F that achieves coverage on D.
In the first part of the talk, I will discuss some work in the high-dimensional setting, where D is supported on \mathbb{R}^d. We give an efficient algorithm that learns a confidence ellipsoid that has volume approximately that of the smallest-volume confidence ellipsoid of bounded condition number.
In the second part of the talk, I will discuss an online formulation of the confidence set problem. We show that the problem of learning confidence sets looks quite different from the standard online learning problem. This talk is based on joint work with Chao Gao, Liren Shan, and Aravindan Vijayaraghavan.
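As a one-dimensional toy analogue of the confidence-set problem above (my own illustration, not from the talk): when the family F is all intervals, the smallest set covering a 1 - α fraction of the samples can be found by scanning windows of consecutive order statistics.

```python
import math
import random

def shortest_covering_interval(samples, alpha=0.1):
    """Return the shortest interval containing at least a (1 - alpha)
    fraction of the samples -- a 1-D analogue of the minimum-volume
    confidence-set problem with F = all intervals."""
    xs = sorted(samples)
    n = len(xs)
    k = math.ceil((1 - alpha) * n)   # number of points the interval must cover
    # An optimal interval can have both endpoints at sample points, so it
    # suffices to scan all windows of k consecutive order statistics.
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
lo, hi = shortest_covering_interval(data, alpha=0.1)
coverage = sum(lo <= x <= hi for x in data) / len(data)
print(lo, hi, coverage)  # coverage is at least 0.9 by construction
```

In higher dimensions this brute-force scan is unavailable, which is what makes the ellipsoid setting in the talk algorithmically interesting.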
February 20th: Jade Lintott (Georgia Tech)
TBD
Abstract: TBD
March 6th: Hoang Huy Nguyen (Georgia Tech)
Almost Sure Convergence of Nonlinear Stochastic Approximation: An Interplay of Noise and Step Size
Abstract: We study the almost sure convergence of the Stochastic Approximation algorithm to the fixed point $x^\star$ of a nonlinear operator under a negative drift condition and a general noise sequence with finite $p$-th moment for some $p > 1$. Classical almost sure convergence results for Stochastic Approximation mostly assume square-integrable noise, in which case any non-summable but square-summable step size sequence is known to suffice for almost sure convergence. However, this limitation prevents wider algorithmic application. In particular, many applications in Machine Learning and Operations Research involve heavy-tailed noise with infinite variance, rendering such guarantees inapplicable. On the other hand, when a stronger condition on the noise is available, such step size requirements are too conservative, as practitioners would prefer larger step sizes for faster convergence. To this end, we show that any non-summable but $p$-th power summable step size sequence suffices to guarantee almost sure convergence, closing this gap in the literature.
Our guarantees are obtained using a universal Lyapunov drift argument. For the regime $p \in (1, 2)$, we show that using the Lyapunov function $\|x-x^\star\|^p$ and applying a Taylor-like bound suffices. For $p > 2$, this approach is no longer applicable, so we introduce a novel iterate projection technique to control the nonlinear terms produced by high-moment bounds and multiplicative noise. We believe our proof techniques and their implications may be of independent interest and pave the way for finite-time analysis of Stochastic Approximation under general noise conditions. This is joint work with Quang D. T. Nguyen, Duc Anh Nguyen, and Prof. Siva Theja Maguluri.
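As a numerical sanity check of the step-size condition in the abstract (my own toy sketch, not the paper's construction; the scalar operator, the Pareto-type noise, and all parameter values are illustrative assumptions): take $p = 1.5$, noise with finite $p$-th moment but infinite variance, and step sizes $a_k = k^{-q}$ with $q = 0.8$, which are non-summable ($q \le 1$) but $p$-th power summable ($qp = 1.2 > 1$).

```python
import random

def heavy_tail(rng, tail=1.8):
    """Symmetric Pareto-type noise: finite p-th moment for every p < tail,
    but infinite variance since tail <= 2 (illustrative choice)."""
    mag = rng.random() ** (-1.0 / tail) - 1.0   # Pareto(tail), shifted to start at 0
    return mag if rng.random() < 0.5 else -mag

def sa_iterates(x0=10.0, x_star=0.0, q=0.8, steps=200_000, seed=1):
    """Run the scalar stochastic approximation
        x_{k+1} = x_k + a_k * ( -(x_k - x_star) + noise_k ),
    where -(x - x_star) plays the role of the negative-drift operator and
    a_k = k^{-q} is non-summable but p-th power summable for p = 1.5
    (since q * p = 1.2 > 1). Returns the final iterate."""
    rng = random.Random(seed)
    x = x0
    for k in range(1, steps + 1):
        a = k ** (-q)
        x += a * (-(x - x_star) + heavy_tail(rng))
    return x

print(abs(sa_iterates()))  # small: the iterates approach x_star = 0
```

With $q = 1/2$ (square-summable steps) one recovers the classical regime; the point of the result above is that the weaker condition $qp > 1$ already suffices under finite $p$-th moment noise.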
April 10th: Aditya Lonkar (Georgia Tech)
Inference in Planted Sparse Graphs
Abstract: TBD