
Hi, I'm Deqing Fu (傅 德卿). 

I'm a third-year Ph.D. student in Computer Science at the University of Southern California (USC). My main research interests are deep learning theory, natural language processing, and the interpretability of AI systems. I'm co-advised by Prof. Vatsal Sharan of the USC Theory Group and Prof. Robin Jia of the Allegro Lab within the USC NLP Group, and I work closely with Prof. Mahdi Soltanolkotabi and Prof. Shang-hua Teng.

Before that, I completed my undergraduate and master's degrees at the University of Chicago, in Mathematics (with Honors), Computer Science (with Honors), and Statistics.

I have a broad interest in machine learning and deep learning. My interests include, but are not limited to, deep learning theory, the interpretability of large language models, and deep generative models.

Links: Google Scholar, Semantic Scholar, GitHub, and CV 

Email: [First][Last] at USC dot EDU

News

Nov 2024 I'll be attending SoCalNLP 2024 presenting IsoBench, DeLLMa, Fourier, and Simplicity Bias.

Nov 2024 I'll be visiting the Simons Institute at UC Berkeley for the Modern Paradigms in Generalization program.

Sep 2024 Two papers (ICL and Fourier) were accepted to NeurIPS 2024!

July 2024 IsoBench was accepted to the first COLM conference!

June 2024 New preprint: Language pre-training implements Fourier feature mechanisms for LLMs to compute arithmetic tasks, such as addition.

May 2024 Started my summer research internship at Meta GenAI!

Nov 2023 Our paper won the Best Paper Award at the SoCalNLP Symposium 2023!

Papers

TLDR: Token-Level Detective Reward Model for Large Vision Language Models [paper]

Deqing Fu, Tong Xiao, Rui Wang, Wang Zhu, Pengchuan Zhang, Guan Pang, Robin Jia, Lawrence Chen

arXiv, 2024

Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models [paper, code]

Deqing Fu, Tian-Qi Chen, Robin Jia, Vatsal Sharan

Neural Information Processing Systems (NeurIPS), 2024
SoCalNLP Symposium 2023 Best Paper Award. 

Pre-trained Large Language Models Use Fourier Features to Compute Addition [paper]

Tianyi Zhou, Deqing Fu, Vatsal Sharan, Robin Jia

Neural Information Processing Systems (NeurIPS), 2024

IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations [paper, website]

Deqing Fu*, Ghazal Khalighinejad*, Ollie Liu*, Bhuwan Dhingra, Dani Yogatama, Robin Jia, Willie Neiswanger

Conference on Language Modeling (COLM), 2024

*Equal Contribution. Co-first authors ordered alphabetically.

Simplicity Bias of Transformers to Learn Low Sensitivity Functions [paper]

Bhavya Vasudeva*, Deqing Fu*, Tianyi Zhou, Elliot Kau, You-Qi Huang, Vatsal Sharan

arXiv, 2024

*Equal Contribution.

DeLLMa: Decision Making Under Uncertainty with Large Language Models [paper, code, website]

Ollie Liu*, Deqing Fu*, Dani Yogatama, Willie Neiswanger

arXiv, 2024

*Equal Contribution.

DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback [paper]

Jiao Sun*, Deqing Fu*, Yushi Hu*, Su Wang, Royi Rassin, Da-Cheng Juan, Dana Alon, Charles Herrmann, Sjoerd van Steenkiste, Ranjay Krishna, Cyrus Rashtchian

arXiv, 2023
*Equal Contribution. Work done while at Google.

SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples [paper, code]

Deqing Fu, Ameya Godbole, Robin Jia

Empirical Methods in Natural Language Processing (EMNLP), 2023


Harnessing the Conditioning Sensorium for Improved Image Translation [paper]

Cooper Nederhood, Nicholas Kolkin, Deqing Fu, Jason Salavon

International Conference on Computer Vision (ICCV), 2021

Education

University of Southern California (2022-2027)

Ph.D. in Computer Science

Advisors: Vatsal Sharan & Robin Jia

University of Chicago (2020-2022)

M.S. in Statistics

University of Chicago (2016-2020)

B.S. (with Honors) in Mathematics

B.S. (with Honors) in Computer Science 

with a specialization in Machine Learning

B.A. in Statistics 

Experience 

Services

Honors and Awards