Full Publication List
* indicates authors with equal contribution. § indicates corresponding author. ‡ indicates a student I co-advised.
Important Pre-Prints
A. Godbole, D. Kavarthapu, R. Das, Z. Gong, A. Singhal, H. Zamani, M. Yu, T. Gao, X. Guo, M. Zaheer, A. McCallum. Multi-step Entity-centric Information Retrieval for Multi-Hop Question Answering. (EMNLP 2019 MRQA Workshop Best Paper Award).
Z. Wang, Y. Zhang, M. Yu, W. Zhang, L. Pan, L. Song, K. Xu, Y. El-Kurdi and R. Florian. Multi-Granular Text Encoding for Self-Explaining Categorization. (ACL 2019 BlackboxNLP Workshop).
W. Yin, K. Kann, M. Yu, H. Schütze. Comparative Study of CNN and RNN for Natural Language Processing. arXiv 2017.
Y. Yu, W. Zhang, K. Hasan, M. Yu, B. Xiang, B. Zhou. End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension. arXiv 2016.
2024
[ICML] Mo Yu*, Qiujing Wang*‡, Shunchi Zhang*‡, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou. Few-Shot Character Understanding in Movies as an Assessment to Meta-Learning of Theory-of-Mind.
[NAACL] Che Jiang‡, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu§, Bowen Zhou§, Jie Zhou. On Large Language Models’ Hallucination with Regard to Known Facts.
[TACL] Cheng Yang, Guoping Huang, Mo Yu, Zhirui Zhang, Siheng Li, Mingming Yang, Shuming Shi, Yujiu Yang, Lemao Liu. An Energy-based Model for Word-level AutoCompletion in Computer-aided Translation.
2023
[EMNLP] Hongyu Zhao, Kangrui Wang, Mo Yu, Hongyuan Mei. Explicit Planning Helps Language Models in Logical Reasoning. [PDF]
[ACL] Mo Yu*, Jiangnan Li*‡, Shunyu Yao, Wenjie Pang, Xiaochen Zhou, Zhou Xiao, Fandong Meng, Jie Zhou. Personality Understanding of Fictional Characters during Book Reading. [PDF]
[ACL] Sijia Wang, Mo Yu, Lifu Huang. The Art of Prompting: Event Detection Based on Type-Specific Prompts. [PDF]
[Findings of ACL] Jiangnan Li, Mo Yu, Fandong Meng, Zheng Lin, Peng Fu, Weiping Wang, Jie Zhou. Question-Interlocutor Scope Realized Graph Modeling over Key Utterances for Dialogue Reading Comprehension. [PDF]
[Findings of ACL] Mo Yu*, Yi Gu*‡, Xiaoxiao Guo, Yufei Feng, Xiaodan Zhu, Michael Greenspan, Murray Campbell, Chuang Gan. JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions. (Short version accepted at NeurIPS 2020 workshop. Dataset included in Ernest Davis's list of Datasets and Benchmarks for Commonsense Reasoning.) [PDF]
[ICML] Guanhua Zhang, Jiabao Ji, Yang Zhang, Mo Yu, Tommi Jaakkola, Shiyu Chang. Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models. [PDF]
[Computer Methods in Applied Mechanics and Engineering] Lu Zhang, Huaiqian You, Tian Gao, Mo Yu, Chung-Hao Lee, Yue Yu. MetaNO: How to Transfer Your Knowledge on Learning Hidden Physics.
2022
[IJCAI-Survey] Yisi Sang‡, Xiangyang Mou‡, Jing Li, Jeffrey Stanton, Mo Yu. A Survey of Machine Narrative Reading Comprehension Assessments.
[TACL] Nouha Dziri, Ehsan Kamalloo, Sivan Milton, Osmar Zaiane, Mo Yu, Edoardo Ponti, Siva Reddy. FaithDial: A Faithful Benchmark for Information-Seeking Dialogue.
[Findings of EMNLP] Yisi Sang*‡, Xiangyang Mou*‡, Mo Yu*§, Dakuo Wang, Jing Li, Jeffrey Stanton. MBTI Personality Prediction for Fictional Characters Using Movie Scripts.
[Findings of EMNLP] Yi Gu‡, Shunyu Yao, Chuang Gan, Josh Tenenbaum, Mo Yu. Revisiting the Roles of "Text" in Text Games.
[NAACL] Yisi Sang*‡, Xiangyang Mou*‡, Mo Yu*§, Shunyu Yao, Jing Li, Jeffrey Stanton. TVShowGuess: Character Comprehension in Stories as Speaker Guessing.
[NAACL] Nouha Dziri, Sivan Milton, Mo Yu, Osmar Zaiane, Siva Reddy. On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?
[NAACL] Pengshan Cai, Hui Wan, Fei Liu, Mo Yu, Hong Yu, Sachindra Joshi. Learning as Conversation: Dialogue Systems Reinforced for Information Acquisition.
[ACL] Ying Xu*, Dakuo Wang*, Mo Yu*, Daniel Ritchie*, Bingsheng Yao*, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Nora Bradford, Branda Sun, Tran Bao Hoang, Yisi Sang, Yufang Hou, Xiaojuan Ma, Diyi Yang, Nanyun Peng, Zhou Yu, Mark Warschauer. Fantastic Questions and Where to Find Them: FairytaleQA--An Authentic Dataset for Narrative Comprehension.
[ACL] Zhenjie Zhao, Yufang Hou, Dakuo Wang, Mo Yu, Chengzhong Liu, Xiaojuan Ma. Educational Question Generation of Children Storybooks via Question Type Distribution Learning and Event-centric Summarization.
[ACL] Bingsheng Yao, Dakuo Wang, Tongshuang Wu, Zheng Zhang, Toby Jia-Jun Li, Mo Yu§, Ying Xu§. It is AI’s Turn to Ask Human a Question: Question and Answer Pair Generation for Children Storybooks in FairytaleQA Dataset.
[Findings of ACL] Sijia Wang‡, Mo Yu, Shiyu Chang, Lichao Sun, Lifu Huang. Query and Extract: Refining Event Extraction as Type-oriented Binary Decoding.
[ICLR] Shunyu Yao, Mo Yu, Yang Zhang, Karthik R Narasimhan, Joshua B. Tenenbaum, Chuang Gan. Linking Emergent and Natural Languages via Corpus Transfer. (Spotlight).
2021
[TACL] Xiangyang Mou*‡, Chenghao Yang*‡, Mo Yu*§, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, Hui Su. Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study.
[NeurIPS] Mo Yu*, Yang Zhang*, Shiyu Chang*, Tommi S Jaakkola. Understanding Interlocking Dynamics of Cooperative Rationalization.
[ICCV] Zhonghao Wang‡, Kai Wang, Mo Yu§, Jinjun Xiong, Wen-mei Hwu, Mark Hasegawa-Johnson, Honghui Shi. Interpretable Visual Reasoning via Induced Symbolic Space.
[EMNLP] Manling Li‡, Tengfei Ma, Mo Yu, Lingfei Wu, Tian Gao, Heng Ji and Kathleen McKeown. Timeline Summarization based on Event Graph Compression via Time-Aware Optimal Transport.
[NAACL] Lin Pan, Chung-Wei Hang, Haode Qi, Abhishek Shah, Saloni Potdar, Mo Yu. Multilingual BERT Post-Pretraining Alignment.
[NAACL Industrial] Haode Qi, Lin Pan, Atin Sood, Abhishek Shah, Ladislav Kunc, Mo Yu, Saloni Potdar. Benchmarking Commercial Intent Detection Services with Practice-Driven Evaluations.
[ACL] The Deep-Thinking QA Team, IBM Research. Leveraging Abstract Meaning Representation for Knowledge Base Question Answering.
[EACL] Xiangyang Mou‡, Mo Yu, Shiyu Chang, Yufei Feng, Li Zhang, Hui Su. Complementary Evidence Identification in Open-Domain Question Answering.
[CSCW] Liuping Wang, Dakuo Wang, Feng Tian, Zhenhui Peng, Xiangmin Fan, Zhan Zhang, Shuai Ma, Mo Yu, Xiaojuan Ma, Hongan Wang. CASS: Towards Building a Social-Support Chatbot for Online Health Community.
2020
[TKDE] Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, Daniel Gildea. Exploring Graph-Structured Passage Representation for Multi-Hop Reading Comprehension with Graph Neural Networks.
[AAAI] Xiang Ni, Jing Li, Mo Yu, Wang Zhou, Kun-Lung Wu. Generalizable Resource Allocation in Stream Processing via Deep Reinforcement Learning.
[CVPR] Zhonghao Wang, Mo Yu, Yunchao Wei, Rogerio Feris, Jinjun Xiong, Wen-mei Hwu, Thomas S. Huang, Honghui Shi. Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation.
[ICML] Shiyu Chang*, Yang Zhang*, Mo Yu*, Tommi S. Jaakkola. Invariant Rationalization.
[EMNLP] Xiaoxiao Guo*, Mo Yu*, Yupeng Gao, Chuang Gan, Murray Campbell and Shiyu Chang. Interactive Fiction Game Playing as Multi-Paragraph Reading Comprehension with Reinforcement Learning.
[Findings of EMNLP] Lu Zhang, Mo Yu, Tian Gao and Yue Yu. MCMH: Learning Multi-Chain Multi-Hop Rules for Knowledge Graph Reasoning.
[ISWC] Nandana Mihindukulasooriya, Gaetano Rossiello, Pavan Kapanipathi, Ibrahim Abdelaziz, Srinivas Ravishankar, Mo Yu, Alfio Gliozzo, Salim Roukos and Alexander Gray. Leveraging Semantic Parsing for Relation Linking over Knowledge Bases.
2019
[NAACL] Hong Wang‡, Wenhan Xiong‡, Mo Yu, Xiaoxiao Guo, Shiyu Chang, William Y. Wang. Sentence Embedding Alignment for Lifelong Relation Extraction.
[NAACL] Wenhan Xiong‡, Jiawei Wu, Daren Lei, Mo Yu, Shiyu Chang, Xiaoxiao Guo, William Y. Wang. Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing.
[AAAI] Xiaoxiao Guo, Shiyu Chang, Mo Yu, Gerald Tesauro, Murray Campbell. Hybrid Reinforcement Learning with Expert State Sequences.
[AAAI] X. Wang, P. Kapanipathi, R. Musa, M. Yu, K. Talamadupula, I. Abdelaziz, M. Chang, A. Fokoue, B. Makni, N. Mattei, M. Witbrock. Improving Natural Language Inference Using External Knowledge in the Science Questions Domain.
[ICML] Yue Yu, Jie Chen, Tian Gao, Mo Yu. DAG-GNN: DAG Structure Learning with Graph Neural Networks.
[ACL] Haoyu Wang*, Ming Tan*, Mo Yu*, Shiyu Chang, Dakuo Wang, Kun Xu, Xiaoxiao Guo and Saloni Potdar. Extracting Multiple-Relations in One-Pass with Pre-Trained Transformers.
[ACL] Wenhan Xiong‡, Mo Yu, Shiyu Chang, Xiaoxiao Guo and William Y. Wang. Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader.
[ACL] Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang and Dong Yu. Cross-lingual Knowledge Graph Alignment via Graph Matching Neural Network.
[EMNLP] Linfeng Song, Yue Zhang, Daniel Gildea, Mo Yu, Zhiguo Wang and Jinsong Su. Leveraging Dependency Forest for Neural Medical Relation Extraction.
[EMNLP] Mo Yu*, Shiyu Chang*, Yang Zhang* and Tommi Jaakkola. Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control. (My Erdős number becomes 3: Mo Yu → Tommi Jaakkola → Noga Alon → Paul Erdős.)
[EMNLP] Ming Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Saloni Potdar, Shiyu Chang and Mo Yu. Out-of-Domain Detection for Low-Resource Text Classification Tasks.
[NeurIPS] Shiyu Chang*, Yang Zhang*, Mo Yu*, Tommi S. Jaakkola. A Game Theoretic Approach to Class-wise Selective Rationalization.
2018
[NAACL] M. Yu*, X. Guo*, J. Yi*, S. Chang, S. Potdar, Y. Cheng, G. Tesauro, H. Wang, B. Zhou. Diverse Few-Shot Text Classification with Multiple Metrics.
[ICLR] S. Wang*, M. Yu*, T. Klinger, W. Zhang, X. Guo, S. Chang, Z. Wang, J. Jiang, G. Tesauro, M. Campbell. Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering.
[AAAI] S. Wang‡, M. Yu, X. Guo, Z. Wang, T. Klinger, W. Zhang, S. Chang, G. Tesauro, B. Zhou. R^3: Reinforced Reader-Ranker for Open-Domain Question Answering.
[ACL] S. Wang‡, M. Yu, S. Chang, J. Jiang. A Co-Matching Model for Multi-choice Reading Comprehension.
[IJCAI] W. Xiong‡, X. Guo, M. Yu, S. Chang, WY. Wang. Scheduled Policy Optimization for Natural Language Communication with Intelligent Agents.
[CVPR] W. Han, S. Chang, D. Liu, M. Yu, M. Witbrock, TS. Huang. Image Super-Resolution via Dual-State Recurrent Networks.
[EMNLP] W. Xiong‡, M. Yu, S. Chang, X. Guo, WY. Wang. One-Shot Relational Learning for Knowledge Graphs.
[EMNLP] Y. Bao‡, S. Chang, M. Yu, R. Barzilay. Deriving Machine Attention from Human Rationales.
[EMNLP] K. Xu, L. Wu, Z. Wang, M. Yu, L. Chen, V. Sheinin. Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model.
[EMNLP] T. Guo, S. Chang, M. Yu, K. Bai. Improving Reinforcement Learning Based Image Captioning with Natural Language Prior.
2017
[ACL] M. Yu, W. Yin, K. Hasan, C. dos Santos, B. Xiang, B. Zhou. Improved Neural Relation Detection for Knowledge Base Question Answering.
[ICLR] Z. Lin‡, M. Feng, CN. Santos, M. Yu, B. Xiang, B. Zhou, Y. Bengio. A Structured Self-Attentive Sentence Embedding.
[NIPS] S. Chang, Y. Zhang, W. Han, M. Yu, X. Guo, W. Tan, X. Cui, M. Witbrock, M. Hasegawa-Johnson, TS. Huang. Dilated Recurrent Neural Networks.
[NIPS workshop on meta-learning] Y. Cheng, M. Yu, X. Guo, B. Zhou. Few-Shot Learning with Meta Metric Learners.
2016
[NAACL] M. Yu, M. Dredze, R. Arora, M. Gormley. Embedding Lexical Features via Low-rank Tensors.
[COLING] W. Yin‡, M. Yu, B. Xiang, B. Zhou, H. Schütze. Simple Question Answering by Attentive Convolutional Neural Network.
[EMNLP] G. Kurata, B. Xiang, B. Zhou, M. Yu. Leveraging Sentence-Level Information with Encoder LSTM for Natural Language Understanding.
2015
[TACL] M. Yu, M. Dredze. Learning Composition Models for Phrase Embeddings.
[NAACL] M. Yu, M. Gormley, M. Dredze. Combining Word Embeddings and Feature Embeddings for Fine-grained Relation Extraction.
[EMNLP] M. Gormley*, M. Yu*, M. Dredze. Improved Relation Extraction with Feature-rich Compositional Embedding Models.
[ACL] N. Peng, M. Yu, M. Dredze. An Empirical Study of Chinese Name Matching and Applications.
[NAACL] N. Peng, F. Ferraro, M. Yu, N. Andrews, J. DeYoung, M. Thomas, M. Gormley, T. Wolfe, C. Harman, B. Van Durme, M. Dredze. A Concrete Chinese NLP Pipeline.
2014
[NIPS] T. Zhao*, M. Yu*, Y. Wang, R. Arora, H. Liu. Accelerated Mini-batch Randomized Block Coordinate Descent Method.
[ACL] M. Yu, M. Dredze. Improving Lexical Embeddings with Semantic Knowledge.
[NIPS workshop on learning semantics] M. Yu, M. Gormley, M. Dredze. Improving Lexical Embeddings with Semantic Knowledge.
2013
[ACL] M. Yu, T. Zhao, Y. Bai, H. Tian, D. Yu. Cross-lingual Projections between Languages from Different Families.
[NAACL] M. Yu, T. Zhao, D. Dong, H. Tian, D. Yu. Compound Embedding Features for Semi-supervised Learning.
[IJCAI] M. Yu, T. Zhao, Y. Bai. Learning Domain Differences Automatically for Dependency Parsing Adaptation.
[Journal of Software] M. Yu, T. Zhao, P. Hu. A Theoretical Analysis on Structured Learning with Noisy Data and its Applications (In Chinese).
Earlier Works
L. Jiang, M. Yu, M. Zhou, X. Liu, T. Zhao. Target-dependent Twitter Sentiment Classification. ACL 2011.
L. Liu, H. Cao, T. Watanabe, T. Zhao, M. Yu, C. Zhu. Locally Training the Log-Linear Model for SMT. EMNLP-CoNLL 2012.
Y. Ren, M. Yu, X. Wang, L. Zhang, W. Ma. Diversifying Landmark Image Search Results by Learning Interested Views from Community Photos. WWW 2010.
M. Yu, S. Wang, C. Zhu, T. Zhao. Semi-supervised Learning for Word Sense Disambiguation Using Parallel Corpora. FSKD 2011: 1490-1494.
P. Hu, M. Yu, J. Li, C. Zhu, T. Zhao. Semi-supervised Learning Framework for Cross-Lingual Projection. Web Intelligence/IAT Workshops 2011: 213-216.
H. Liang, L. Liu, M. Yu, Y. Liu, P. Hu, T. Li, C. Zhang, H. Cao, T. Zhao. Technical Reports of the HIT Machine Intelligence & Translation Lab for CWMT 2011 (In Chinese).
Z. Li, H. Li, M. Yu, T. Zhao, S. Li. Event Entailment Extraction Based on EM Iteration. IALP 2010: 101-104.
X. Han, M. Yu, C. Zhu, and T. Zhao. A Sequence Kernel Method for Chinese Subcategorization Analysis. Chinese Journal of Electronics. Vol.19, No.3, July 2010.
X. Wang, M. Yu, L. Zhang, W. Ma. Argo: Intelligent Advertising Made Possible from Users' Photos (demo paper). ACM MM 2009.
X. Wang, M. Yu, L. Zhang, W. Ma. Advertising Based on Users' Photos. IEEE ICME 2009 Workshop.
X. Wang, M. Yu, L. Zhang, R. Cai, W. Ma. Argo: Intelligent Advertising by Mining a User's Interest from His Photo Collections. ADKDD 2009.
C. Zhu, M. Yu, T. Zhao. Chinese Word Segmenter Based on Discriminative Classifiers Integration. Journal of Computational Information Systems, 3:5 (2008), 1-7.