Angeliki Giannou
PhD student
University of Wisconsin, Madison
Email: giannou@wisc.edu
How Well Can Transformers Emulate In-context Newton's Method?
Angeliki Giannou, Liu Yang, Tianhao Wang, Dimitris Papailiopoulos and Jason Lee (AISTATS 2025).
arXiv link
Looped Transformers as Programmable Computers
Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason Lee, Dimitris Papailiopoulos (ICML 2023).
arXiv link
The Expressive Power of Tuning Only the Normalization Layers
Angeliki Giannou, Shashank Rajput, Dimitris Papailiopoulos (COLT 2023).
arXiv link
Dissecting Chain of Thought: A Study on Compositional In-context Learning of MLPs
Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak (NeurIPS 2023).
Link
On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond
Angeliki Giannou, Emmanouil Vlatakis, Panayotis Mertikopoulos (NeurIPS 2021).
Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information
Angeliki Giannou, Emmanouil Vlatakis, Panayotis Mertikopoulos (COLT 2021).
arXiv link
For an updated list of publications, please visit my Google Scholar profile.