Dian Yu 于典
NLP Researcher
Tencent AI Lab
E-mail: yudiandoris (AT) gmail (DOT) com
Research Interests
Large Language Models, Natural Language Processing, Information Extraction, Machine Reading Comprehension, and Dialogue Understanding
Selected Preprints
Yi Su, Dian Yu, Linfeng Song, Juntao Li, Haitao Mi, Zhaopeng Tu, Min Zhang, and Dong Yu. Expanding RL with Verifiable Rewards Across Diverse Domains. [paper] [resource]
Yulai Zhao*, Haolin Liu*, Dian Yu, S.Y. Kung, Haitao Mi, and Dong Yu. One Token to Fool LLM-as-a-Judge. [paper]
Tao Ge, Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. Scaling Synthetic Data Creation with 1,000,000,000 Personas. [paper] [resource]
Yue Wang, Qiuzhi Liu, Jiahao Xu, Tian Liang, Xingyu Chen, Zhiwei He, Linfeng Song, Dian Yu, Juntao Li, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, and Dong Yu. Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs. [paper]
Selected Publications
Yuheng Zhang, Dian Yu, Baolin Peng, Linfeng Song, Ye Tian, Mingyue Huo, Nan Jiang, Haitao Mi, and Dong Yu. Iterative Nash Policy Optimization: Aligning LLMs with General Preferences via No-Regret Learning (ICLR 2025) (Oral). [paper]
Murong Yue, Wenlin Yao, Haitao Mi, Dian Yu, Ziyu Yao, and Dong Yu. DOTS: Learning to Reason Dynamically in LLMs via Optimal Reasoning Trajectories Search (ICLR 2025). [paper]
Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, and Dong Yu. Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs (ICML 2025). [paper]
Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Lei Han, Haitao Mi, and Dong Yu. Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing (NeurIPS 2024). [paper]
Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, and Jianshu Chen. Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models (EMNLP 2024 findings). [paper]
Zhihan Zhang, Tao Ge, Zhenwen Liang, Wenhao Yu, Dian Yu, Mengzhao Jia, Dong Yu, and Meng Jiang. Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning (EMNLP 2024). [paper] [code]
Zhenwen Liang, Dian Yu, Xiaoman Pan, Wenlin Yao, Qingkai Zeng, Xiangliang Zhang, and Dong Yu. MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning (LREC-COLING 2024). [paper] [code]
Dian Yu, Xiaoyang Wang, Wanshun Chen, Nan Du, Longyue Wang, Haitao Mi, and Dong Yu. More Than Spoken Words: Nonverbal Message Extraction and Generation (EMNLP 2023).
Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. Document-Level Machine Translation with Large Language Models (EMNLP 2023).
Xiaoman Pan, Wenlin Yao, Hongming Zhang, Dian Yu, Dong Yu, and Jianshu Chen. Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models (ICLR 2023) (spotlight).
Dian Yu, Ben Zhou, and Dong Yu. End-to-End Chinese Speaker Identification (NAACL 2022) (Oral). [paper] [code]
Kai Sun*, Dian Yu*, Jianshu Chen, Dong Yu, and Claire Cardie. Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge (ACL 2022). [paper] [code]
Dian Yu, Kai Sun, Dong Yu, and Claire Cardie. Self-Teaching Machines to Read and Comprehend with Large-Scale Multi-Subject Question-Answering Data (EMNLP 2021 findings). [paper] [code]
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. CLUE: A Chinese Language Understanding Evaluation Benchmark (COLING 2020). [paper] [code]
Dian Yu*, Kai Sun*, Claire Cardie, and Dong Yu. Dialogue-Based Relation Extraction (ACL 2020). [paper] [code]
Hongyu Gong, Yelong Shen, Dian Yu, Jianshu Chen, and Dong Yu. 2020. Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension (ACL 2020). [paper] [code]
Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2020. Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension (TACL 2020). [paper] [code]
Yue Cao, Xiaojun Wan, Jin-ge Yao, and Dian Yu. MultiSumm: Towards a Unified Model for Multi-Lingual Abstractive Summarization (AAAI 2020).
Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David McAllester, and Dan Roth. 2019. Evidence Sentence Extraction for Machine Reading Comprehension (CoNLL 2019). [paper] [resource]
Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2019. Improving Machine Reading Comprehension with General Reading Strategies (NAACL-HLT 2019) (Oral). [code]
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A Challenge Dataset and Models for Dialogue-Based Reading Comprehension (TACL 2019). [dataset]
Dian Yu. 2017. Unsupervised Graph-Based Relation Extraction and Validation for Knowledge Base Population. PhD Dissertation. Rensselaer Polytechnic Institute.
Dian Yu, Lifu Huang, and Heng Ji. 2017. Open Relation Extraction and Grounding (IJCNLP 2017) (Oral).
Dian Yu and Heng Ji. 2016. Unsupervised Person Slot Filling based on Graph Mining (ACL 2016) (Oral).
Shi Zhi, Bo Zhao, Wenzhu Tong, Jing Gao, Dian Yu, Heng Ji, and Jiawei Han. 2015. Modeling Truth Existence in Truth Discovery (KDD 2015).
Dian Yu, Yulia Tyshchuk, Heng Ji, and William Wallace. 2015. Detecting Deceptive Groups Using Conversations and Network Analysis (ACL-IJCNLP 2015). [games]
Dian Yu, Heng Ji, Sujian Li, and Chin-Yew Lin. 2015. Why Read if You can Scan: Scoping Strategy for Biographical Fact Extraction (NAACL-HLT 2015) (short). [triggers]
Dian Yu, Hongzhao Huang, Taylor Cassidy, Heng Ji, Chi Wang, Shi Zhi, Jiawei Han, Clare Voss, and Malik Magdon-Ismail. 2014. The Wisdom of Minority: Unsupervised Slot Filling Validation based on Multi-dimensional Truth-Finding (COLING 2014) (Oral).
Hongzhao Huang, Zhen Wen, Dian Yu, Heng Ji, Yizhou Sun, Jiawei Han, and He Li. 2013. Resolving Entity Morphs in Censored Data (ACL 2013).
Professional Services
Program Committee:
ACL (2017-2021), NAACL-HLT (2016, 2018, 2019), COLING (2020), EMNLP (2018-2020), AAAI (2019, 2020), EACL (2021), ICASSP (2022)
Journal:
NLE (2019, 2021), JAIR (2018, 2019), TASLP (2019)
Senior Area Chair:
AACL-IJCNLP (2022): Question Answering
Junior Area Chair/Action Editor/Meta-Reviewer:
NAACL-HLT (2021): Information Extraction, EMNLP (2021): Information Extraction, ACL (2022),
ICASSP (2023, 2024), LREC-COLING (2024): Information Extraction, LREC (2026): Information Extraction,
ARR (2024-2025)
Education
09/2013-09/2017 Ph.D. in Computer Science, Rensselaer Polytechnic Institute (Advisor: Prof. Heng Ji)
09/2012-07/2013 Ph.D. in Computer Science, The Graduate Center, CUNY (Advisor: Prof. Heng Ji)
09/2008-07/2012 B.Eng. in Communication Engineering, Beijing University of Posts and Telecommunications
Work Experience
Tencent AI Lab, Bellevue, WA
Senior Researcher Nov. 2017 - present
Bosch Research, Palo Alto, CA
Research Intern May 2015 - Aug. 2015
Mentor: Dr. Lin Zhao, Dr. Kui Xu
Knowledge Mining Group, Microsoft Research Asia, Beijing, China
Research Intern Jun. 2014 - Sep. 2014
Mentor: Dr. Chin-Yew Lin
Language Computing & Web Mining Group, Peking University, Beijing, China
Undergraduate Research Intern Aug. 2011 - Jul. 2012
Mentor: Prof. Xiaojun Wan