My research interests lie in large language model sparsity, LLM interpretability, and the learning dynamics of LLMs. I am particularly interested in how low-dimensional structure emerges in LLM representations during training, and in turning these insights into practical methods for improving LLM efficiency and trustworthiness.
Outside of research, I enjoy movies, hiking, badminton, squash, tennis, and reading novels and mathematics books.
🗞️ News ‼️
Sept 2025: I submitted my first paper, "Beyond Taylor Expansion: Intermediate Activation Perspectives in Structured Pruning", to ICLR.