About Me
I am a final-year Ph.D. candidate studying robotics at Princeton University in the Department of Mechanical and Aerospace Engineering.
I am a member of the Intelligent Robot Motion Laboratory (IRoM Lab), advised by Professor Anirudha Majumdar.
Research Overview
My work has two core threads, both within the paradigm of decision-making under uncertainty.
First, I am interested in developing theoretical frameworks that provide guarantees on robot behavior in both the stochastic and nonstochastic uncertainty regimes, with an emphasis on compatibility with online (sequential) data accumulation and performance evaluation. The central challenge is to construct guarantees that are simultaneously non-vacuous, efficiently computable, and practically useful.
Second, I am interested in elucidating an epistemological framework for, and the fundamental limits of, robotic systems. The field's present emphasis on empirical performance raises natural questions that remain difficult to answer: Is Task A easier or harder than Task B, and in what sense? Is Task C feasible for a particular robot, and how can we know? How do we compare Policy A attempting Task A against Policy B attempting Task B? Beyond these questions, fundamental limits inherited from other disciplines are often under-emphasized. A direct example is the sample complexity required by statistically significant evaluation procedures developed in the statistical testing community.
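To make the sample-complexity point concrete, a minimal sketch of the standard calculation (the function name and the numerical settings are my own illustrative choices, not drawn from any particular paper): a two-sided Hoeffding bound gives the number of independent rollouts needed to estimate a policy's success rate to a prescribed accuracy and confidence.

```python
import math

def hoeffding_sample_size(eps: float, delta: float) -> int:
    """Rollouts needed so the empirical success rate is within eps of the
    true rate with probability at least 1 - delta, for outcomes in [0, 1]
    (two-sided Hoeffding bound: n >= ln(2/delta) / (2 * eps^2))."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

# e.g., certifying a success rate to within 5 points at 95% confidence
n = hoeffding_sample_size(eps=0.05, delta=0.05)  # 738 rollouts
```

Even this crude, distribution-free bound already demands hundreds of trials per policy, which is one reason evaluation cost deserves more attention than it typically receives.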
Nonstochastic Uncertainty Regime
The core technique utilized here is online learning and online convex optimization ([1], [2]). In particular, I am interested in 'lifting' nonasymptotic guarantees from interoceptive settings (control theory) to exteroceptive settings (e.g., obstacle avoidance) that require policy synthesis. These include:
Generating adversarial disturbances online to find weaknesses in controllers [3]
Generating an obstacle-avoidance controller to navigate cluttered environments [4]
Furthermore, I am interested in extending these techniques to 'meta-procedures' in robotics, including active data collection, representation learning (ongoing), and policy evaluation.
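As a toy illustration of the online-learning machinery underlying this line of work (the setup below is my own minimal example, not any specific algorithm from the cited papers): online gradient descent plays a point, observes the gradient of an adversarially chosen convex loss, and steps with a decaying rate, which yields O(sqrt(T)) regret against the best fixed decision in hindsight.

```python
import numpy as np

def ogd(loss_grads, x0, eta0, project):
    """Online gradient descent: at round t, play x_t, observe the gradient
    of that round's convex loss at x_t, and take a projected step with
    rate eta0 / sqrt(t). Returns the sequence of played iterates."""
    x = np.asarray(x0, dtype=float)
    iterates = []
    for t, grad in enumerate(loss_grads, start=1):
        iterates.append(x.copy())
        x = project(x - (eta0 / np.sqrt(t)) * grad(x))
    return iterates

# Toy adversary: quadratic losses f_t(x) = ||x - c_t||^2 with shifting centers
rng = np.random.default_rng(0)
centers = rng.uniform(-1.0, 1.0, size=(100, 2))
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
box = lambda x: np.clip(x, -1.0, 1.0)  # projection onto [-1, 1]^2
iterates = ogd(grads, x0=np.zeros(2), eta0=0.5, project=box)
```

The same template, with the loss gradient supplied by a disturbance generator or a navigation objective, is the shape of the 'lifted' procedures above.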
Stochastic Uncertainty Regime
The core technique utilized here is PAC-Bayesian generalization bounds ([1], [2]). Here, I am interested in meta-evaluation procedures that require synthesis of small (but nonconvex) decision functions, as well as understanding the tightness properties of the guarantees themselves. For example:
Synthesizing policy-dependent failure predictors [3]
(Ongoing) Tighter PAC-Bayes bounds for overparameterized policies
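For readers unfamiliar with the PAC-Bayes toolkit, a minimal sketch of a McAllester-style bound (the function name and numbers are illustrative; tighter kl-form bounds exist and are what the tightness questions above concern): with probability at least 1 - delta over an i.i.d. sample of size n, the expected risk of the posterior Q exceeds its empirical risk by at most a slack term driven by KL(Q||P).

```python
import math

def mcallester_bound(emp_risk: float, kl: float, n: int, delta: float) -> float:
    """McAllester-style PAC-Bayes bound: expected risk of the posterior
    is at most emp_risk + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n)),
    with probability at least 1 - delta over the n-sample."""
    slack = math.sqrt((kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n))
    return emp_risk + slack

# Illustrative numbers: 8% empirical failure rate, KL of 10 nats, 5000 rollouts
bound = mcallester_bound(emp_risk=0.08, kl=10.0, n=5000, delta=0.01)
```

The slack shrinks as O(1/sqrt(n)) but grows with the KL term, which is precisely why overparameterized policies (large KL) make these bounds loose and motivate the tightening work above.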
Additionally, I am interested in purer 'statistical engineering' problems, such as sequential procedures for comparative policy evaluation (ongoing).
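One simple instance of such a sequential procedure (a sketch of my own, not the ongoing work itself): run the two policies in paired rollouts, maintain an anytime-valid Hoeffding confidence interval on the mean difference in success via a union bound over rounds, and stop as soon as the interval excludes zero.

```python
import math
import random

def sequential_compare(rollout_a, rollout_b, delta=0.05, max_rounds=10000):
    """Anytime-valid comparison of two policies' success rates. After each
    paired rollout, form a Hoeffding interval on the mean per-round
    difference (which lies in [-1, 1]) at level delta_t = delta/(t*(t+1)),
    so the total error probability is at most delta by a union bound.
    Stop as soon as the interval excludes zero."""
    total = 0.0
    for t in range(1, max_rounds + 1):
        total += rollout_a() - rollout_b()  # difference in {-1, 0, 1}
        mean = total / t
        delta_t = delta / (t * (t + 1))
        radius = math.sqrt(2.0 * math.log(2.0 / delta_t) / t)  # range-2 Hoeffding
        if mean - radius > 0:
            return "A better", t
        if mean + radius < 0:
            return "B better", t
    return "undecided", max_rounds

# Simulated policies with success rates 0.9 vs 0.5
random.seed(0)
verdict, rounds = sequential_compare(lambda: random.random() < 0.9,
                                     lambda: random.random() < 0.5)
```

The appeal of the sequential formulation is that the stopping time adapts to the gap: clearly separated policies are resolved in few rollouts, while the worst-case budget is only paid when the policies are genuinely close.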