Reading List
The following is my personal reading list for individual topics.
Note: the material on this website is tentative and updated regularly.
Tomas Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, Sanjeev Khudanpur, 2010. Recurrent neural network based language model. INTERSPEECH.
Tomas Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černocký, Sanjeev Khudanpur, 2011. Extensions of recurrent neural network language model. ICASSP.
Rafal Józefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu, 2016. Exploring the Limits of Language Modeling. CoRR abs/1602.02410.
Lehnert, 1981. Plot Units and Narrative Summarization. Cognitive Science.
Goyal, Riloff, Daume III, 2010. Automatically Producing Plot Unit Representations for Narrative Text. EMNLP.
Mooney and DeJong, 1985. Learning schemata for natural language processing. IJCAI.
Chambers and Jurafsky, 2008. Unsupervised Learning of Narrative Event Chains. ACL.
Chambers and Jurafsky, 2009. Unsupervised Learning of Narrative Schemas and their Participants. ACL.
Regneri, Koller, and Pinkal, 2010. Learning script knowledge with web experiments. ACL.
Jans, Bethard, Vulić, and Moens, 2012. Skip n-grams and ranking functions for predicting script events. EACL.
Chambers, 2013. Event Schema Induction with a Probabilistic Entity-Driven Model. EMNLP.
Cheung, Poon, and Vanderwende, 2013. Probabilistic frame induction. NAACL-HLT.
Modi and Titov, 2014. Inducing neural models of script knowledge. CoNLL.
Rudinger, Rastogi, Ferraro, and Van Durme, 2015. Script induction as language modeling. EMNLP (short).
Pichotta and Mooney, 2014. Statistical script learning with multi-argument events. EACL.
Pichotta and Mooney, 2016. Learning Statistical Scripts with LSTM Recurrent Neural Networks. AAAI.
Peng and Roth, 2016. Two Discourse Driven Language Models for Semantics. ACL.
Bamman, O'Connor and Smith, 2013. Learning Latent Personas of Film Characters. ACL.
Bamman, Underwood and Smith, 2014. A Bayesian Mixed Effects Model of Literary Character. ACL.
Flekova and Gurevych, 2015. Personality profiling of fictional characters using sense-level links between lexical resources. EMNLP.
Krishnan and Eisenstein, 2015. “You’re Mr. Lebowski, I’m the Dude”: Inducing Address Term Formality in Signed Social Networks. NAACL-HLT.
Iyyer, Guha, Chaturvedi, Boyd-Graber, Daume III, 2016. Feuding Families and Former Friends: Unsupervised Learning for Dynamic Fictional Relationships. NAACL.
Chaturvedi, Iyyer, Daumé III, 2017. Unsupervised Learning of Evolving Relationships Between Literary Characters. AAAI.
Harrison and Riedl, 2016. Learning From Stories: Using Crowdsourced Narratives to Train Virtual Agents. AAAI.
Riedl and Harrison, 2016. Using Stories to Teach Human Values to Artificial Agents. Workshop on AI, Ethics and Society.
Mueller, 2004. Understanding script-based stories using commonsense reasoning. Cognitive Systems Research.
Underwood, Bamman, and Lee, 2018. The Transformation of Gender in English-Language Fiction. Cultural Analytics.
Elson, 2012. Detecting story analogies from annotations of time, action and agency. LREC Workshop on Computational Models of Narrative.
Chaturvedi, Srivastava, and Roth, 2018. Where Have I Heard This Story Before? Identifying Narrative Similarity in Movie Remakes. NAACL-HLT (short).
Roemmele and Gordon, 2018. An Encoder-decoder Approach to Predicting Causal Relations in Stories. NAACL Workshop on Storytelling.
Cai, Tu, and Gimpel, 2017. Pay Attention to the Ending: Strong Neural Baselines for the ROC Story Cloze Task. ACL (short).
Chaturvedi, Peng, and Roth, 2017. Story Comprehension for Predicting What Happens Next. EMNLP.
Radford, Narasimhan, Salimans, and Sutskever, 2018. Improving Language Understanding by Generative Pre-Training.
Chen, Chen, and Yu, 2018. Incorporating Structured Commonsense Knowledge in Story Completion. arXiv.
Li, Li, Wei, Gu, Jatowt, and Yang, 2018. A Multi-Attention based Neural Network with External Knowledge for Story Ending Predicting Task. COLING.
McIntyre and Lapata, 2009. Learning to tell tales: A data-driven approach to story generation. ACL.
McIntyre and Lapata, 2010. Plot induction and evolutionary search for story generation. ACL.
Rishes, Lukin, Elson, and Walker, 2013. Generating different story tellings from semantic representations of narrative. ICIDS.
Lukin, Reed, and Walker, 2015. Generating sentence planning variations for story telling. SIGDIAL.
Harrison, Purdy, and Riedl, 2017. Toward Automated Story Generation with Markov Chain Monte Carlo Methods and Deep Neural Networks. Workshop on Intelligent Narrative Technologies.
Martin, Ammanabrolu, Wang, Hancock, Singh, Harrison, and Riedl, 2018. Event Representations for Automated Story Generation with Deep Neural Nets. AAAI.
Clark, Ji, and Smith, 2018. Neural Text Generation in Stories using Entity Representations as Context. NAACL.
Fan, Lewis, and Dauphin, 2018. Hierarchical Neural Story Generation. ACL.
Purdy, Wang, He, and Riedl, 2018. Predicting Generated Story Quality with Quantitative Measures. AIIDE.
Xu, Ren, Zhang, Zeng, Cai, and Sun, 2018. A Skeleton-Based Model for Promoting Coherence Among Sentences in Narrative Story Generation. EMNLP.
Yao, Peng, Weischedel, Knight, Zhao and Yan, 2019. Plan-And-Write: Towards Better Automatic Storytelling. AAAI.
Venugopalan, Xu, Donahue, Rohrbach, Mooney, and Saenko, 2015. Translating videos to natural language using deep recurrent neural networks. NAACL.
Iyyer, Manjunatha, Guha, Vyas, Boyd-Graber, Daumé III, and Davis, 2017. The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives. CVPR.
Gillick and Bamman, 2018. Telling Stories with Soundtracks: An Empirical Analysis of Music in Film. NAACL.