I'm a Master's student in Computer Science at Brown University, privileged to be advised by Prof. Michael Littman and Prof. Stephen Bach.
Before joining Brown, I earned my Bachelor's degree in Computer Science at UC San Diego with the highest distinction. I had the privilege of working in Prof. Hao Su's lab and Prof. Sicun Gao's group on sample-efficient reinforcement learning. I also spent three years as an HPC research engineer at the San Diego Supercomputer Center under Dr. Mary Thomas's supervision. I (re)founded Supercomputing @ UCSD and served as its president from 2019 to 2022.
GitHub: @SeuperHakkerJa · Google Scholar: Xiaochen Li · X/Twitter: @jacobli99
Email: xiaochen_li [at] brown [dot] edu
Research Interests
My research focuses on advancing our understanding of neural network training and inference dynamics to enhance generalization, training efficiency, and uncertainty estimation. While current AI systems are powerful, our comprehension of their inner workings remains limited. My goal is to deepen this understanding to develop more robust and controllable AI systems. Specific interests:
Investigating behaviors and phenomena in neural networks that go beyond their stated training objectives, such as mesa-optimization. These unexpected behaviors are crucial for understanding both the beneficial emergent properties and the potential risks of AI systems. My recent work leverages tools from mechanistic interpretability to explore cross-lingual generalization within the embedding spaces of transformers.
Analyzing 'modules' and pipelines to build better AI systems. My recent work rigorously benchmarks language models' capabilities in using planners, focusing specifically on translating natural language into PDDL, which helps calibrate our expectations regarding uncertainties in AI systems.
Enhancing the safety and utility of language models in multilingual contexts.
Max Zuo, Francisco Piedrahita Velez, Xiaochen Li, Michael L. Littman, Stephen H. Bach
Preprint (submitted to NeurIPS 2024 Datasets and Benchmarks Track)
[GitHub Page] [paper]
Stone Tao, Xiaochen Li, Tongzhou Mu, Zhiao Huang, Yuzhe Qin, Hao Su
ICML 2023
[GitHub Page] [Project Page] [paper] [video] [slides]
Arunav Gupta, John Ge, John Li, Zihao Kong, Kaiwen He, Matthew Mikhailov, Xiaochen Li, Maximilliam Apodaca, Mary P. Thomas, Paul Rodriguez, Mahidhar Tatineni, Santosh Bhatt, Bryan Chin
IEEE Transactions on Parallel and Distributed Systems, vol. 34, no. 6, June 2023
[paper]
Zhiao Huang, Xiaochen Li, Hao Su
RSS 2021 Workshop on Declarative and Neurosymbolic Representations in Robot Learning and Control
Xiaochen Li, Maximilliam Apodaca, Arunav Gupta, Zihao Kong, Hongyi Pan, Hongyu Zou, Lewis Carrol, Marty Kandes, Zhaoyi Li, Mahidhar Tatineni, Mary P. Thomas
IEEE Transactions on Parallel and Distributed Systems, vol. 33, no. 9, September 2022
[GitHub Page] [paper]