Angeliki Giannou
PhD student
University of Wisconsin, Madison
Email: giannou@wisc.edu
How Well Can Transformers Emulate In-context Newton's Method?
Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos, Jason Lee.
Arxiv link
Looped Transformers as Programmable Computers
Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason Lee, Dimitris Papailiopoulos (ICML 2023).
Arxiv link
The Expressive Power of Tuning Only the Normalization Layers
Angeliki Giannou, Shashank Rajput, Dimitris Papailiopoulos (COLT 2023).
Arxiv link
On the convergence of policy gradient methods to Nash equilibria in general stochastic games
Angeliki Giannou, Kyriakos Lotidis, Emmanouil Vlatakis, Panayotis Mertikopoulos (NeurIPS 2022).
On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond
Angeliki Giannou, Emmanouil Vlatakis, Panayotis Mertikopoulos (NeurIPS 2021).
Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information
Angeliki Giannou, Emmanouil Vlatakis, Panayotis Mertikopoulos (COLT 2021).
Arxiv link
For an updated list of publications, please visit my Google Scholar profile.