Title: Towards more controllable language generation: knowledge and planning
Abstract. Existing neural models for natural language generation still face many issues: diversity, informativeness, repetition, consistency, coherence, and logic are not well controlled in these models, particularly in open-ended language generation tasks. In this talk, the speaker will discuss the typical settings, issues, and existing solutions in natural language generation. In particular, the speaker will present his recent research on using knowledge and planning for better language generation, with applications to dialog generation, story ending generation, commonsense story generation, and data-to-text generation.
Bio. Dr. Minlie Huang is currently an Associate Professor with the Department of Computer Science and Technology, Tsinghua University. His research interests include natural language processing, particularly dialog systems, reading comprehension, and sentiment analysis. He has published more than 60 papers in premier conferences and journals (ACL, EMNLP, AAAI, IJCAI, WWW, SIGIR, etc.). He was a best demo paper nominee at ACL 2019, and is the recipient of the IJCAI 2018 distinguished paper award, the CCL 2018 best demo award, the NLPCC 2015 best paper award, the Hanvon Youth Innovation Award in 2018, an MSRA collaborative research award, and the Wuwenjun AI Award in 2019. His work on emotional chatting machines was reported by MIT Technology Review, the Guardian, Nvidia, and many other media outlets. He serves as a standing reviewer for TACL, an area chair for ACL 2020/2016 and EMNLP 2019/2014/2011, a Senior PC member for AAAI 2017-2020 and IJCAI 2017-2020, and a reviewer for TASLP, TKDE, TOIS, TPAMI, etc. His research has been supported by an NSFC key project, several NSFC regular projects, and many IT companies.