Neural methods for natural language generation (NNLG) arrived with much fanfare a few years ago and became the dominant approach in the E2E NLG Challenge (Dušek et al., 2018). While neural methods promise flexible, end-to-end trainable models, recent studies have revealed that they fail to produce satisfactory output for longer or more complex texts (Wiseman et al., 2017), and that their black-box nature makes them difficult to control, in contrast to traditional NLG architectures that employ explicit representations of discourse structure and/or sentence planning operations (Reed et al., 2018). As a result, several papers have recently appeared (Puduppully et al., 2019; Moryossef et al., 2019; Balakrishnan et al., 2019; Oraby et al., 2019; Yi et al., 2019) that explore how to incorporate intermediate structures into NNLG or otherwise improve coherence and cohesion.
This workshop aims to encourage further research on enhancing quality in NNLG in terms of discourse coherence and cohesion, along with ways to make NNLG models easier to control. Topics covered will include: the limits of current end-to-end NNLG with respect to sentence planning and discourse structure; methods for improving discourse coherence and cohesion in NNLG, for example by making better use of discourse connectives or by avoiding unnecessary repetition; methods for control and interpretability of NNLG, for example by providing more explicit guidance or structure in the input; and better methods for evaluating discourse coherence and cohesion in NNLG.
(10/1) The list of accepted papers is now up!
(10/16) The full schedule is now available!
(10/30) The proceedings for the workshop are now available here. Join us on November 1st!
Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White and Rajen Subba. 2019. Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue. In Proc. of ACL-19.
Ondřej Dušek, Jekaterina Novikova and Verena Rieser. 2018. Findings of the E2E NLG Challenge. In Proc. of INLG-18.
Amit Moryossef, Yoav Goldberg and Ido Dagan. 2019. Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation. In Proc. of NAACL-19.
Ratish Puduppully, Li Dong and Mirella Lapata. 2019. Data-to-Text Generation with Content Selection and Planning. In Proc. of AAAI-19.
Shereen Oraby, Vrindavan Harrison and Marilyn Walker. 2019. Curate and Generate: A Corpus and Method for Joint Control of Semantics and Style in Neural NLG. In Proc. of ACL-19.
Lena Reed, Shereen Oraby and Marilyn Walker. 2018. Can Neural Generators for Dialogue Learn Sentence Planning and Discourse Structuring? In Proc. of INLG-18.
Sam Wiseman, Stuart Shieber and Alexander Rush. 2017. Challenges in Data-to-Document Generation. In Proc. of EMNLP-17.
Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessandra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel and Dilek Hakkani-Tur. 2019. Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators.