TextGraphs 2020

14th Workshop on Graph-Based

Natural Language Processing

Workshop at COLING 2020

Barcelona, Spain (Online)

December 13, 2020


The workshops in the TextGraphs series have published and promoted research on the synergy between the field of Graph Theory and Natural Language Processing. Besides traditional NLP applications such as word sense disambiguation, semantic role labeling, and information extraction, graph-based solutions nowadays also target new web-scale applications such as information propagation in social networks, rumor proliferation, e-reputation, language dynamics learning, and future event prediction, to name a few.

Previous editions of the series can be found here.


Danai Koutra (http://web.eecs.umich.edu/~dkoutra/), Assistant Professor at University of Michigan, Ann Arbor

Abstract. Little is known about the trustworthiness of predictions made by knowledge graph embedding (KGE) models. In this talk, I will first present our recent work on investigating the calibration of KGE models, or the extent to which they output confidence scores that reflect the expected correctness of predicted knowledge graph triples. Going beyond the standard closed-world assumption, I will introduce the more realistic but challenging open-world assumption, in which unobserved predictions are not considered true or false until ground-truth labels are obtained, and I will discuss the effectiveness of calibration techniques under this setting. I will also present a case study of human-AI collaboration, showing that calibrated predictions can improve human performance in a knowledge graph completion task. Second, to address the need for a solid benchmark in knowledge graph completion and provide a boost to research on this task, I will present CoDEx, a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing Freebase-based knowledge graph completion benchmarks in scope and level of difficulty.
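The notion of calibration discussed in the abstract can be made concrete: a model is well calibrated if, among triples to which it assigns confidence p, roughly a fraction p are in fact correct. The following sketch computes the standard expected calibration error (ECE) over hypothetical KGE confidence scores and ground-truth labels; the scores and labels are illustrative assumptions, not data from the talk.

```python
def expected_calibration_error(confidences, labels, n_bins=10):
    """Standard ECE: bin predictions by confidence, then compare the
    average confidence in each bin to the empirical accuracy in it."""
    bins = [[] for _ in range(n_bins)]
    for conf, label in zip(confidences, labels):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, label))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# Hypothetical confidence scores for predicted triples and their labels
scores = [0.9, 0.8, 0.75, 0.3, 0.6, 0.95]
labels = [1, 1, 0, 0, 1, 1]
print(expected_calibration_error(scores, labels))
```

A perfectly calibrated model yields an ECE of zero; post-hoc techniques such as Platt scaling or temperature scaling aim to reduce it without changing the model's ranking of triples.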

Bio. Danai Koutra is a Morris Wellman Faculty Development Assistant Professor in Computer Science and Engineering at the University of Michigan, where she leads the Graph Exploration and Mining at Scale (GEMS) Lab. Her research focuses on practical and scalable methods for large-scale real networks, and has applications in neuroscience, organizational analytics, and social sciences. She won the SIGKDD Rising Star Award, a Facebook and a Google Faculty Award in 2020, an NSF CAREER award, an Amazon Research Faculty Award, and a Precision Health Investigator award in 2019, an ARO Young Investigator award and an Adobe Data Science Research Faculty Award in 2018, the 2016 ACM SIGKDD Dissertation award, and an honorable mention for the SCS Doctoral Dissertation Award (CMU). She has multiple papers in top data mining conferences, including 8 award-winning papers. Among other service roles, she is the Program Director of the SIAG on Data Mining and Analytics, and an Associate Editor of ACM TKDD. She earned her Ph.D. and M.S. in Computer Science from CMU in 2015 and her diploma in Electrical and Computer Engineering at the National Technical University of Athens in 2010.

Yizhou Sun (http://web.cs.ucla.edu/~yzsun/index.html), Associate Professor of Computer Science, UCLA

Abstract. Heterogeneous information networks (HINs) are graphs containing different types of objects and different types of relations, with broad applications ranging from knowledge graphs to medical networks and recommendation systems. In this talk, we present two recent developments in representation learning on HINs. First, to handle the different types of objects and relations in HINs, we propose the Heterogeneous Graph Transformer (HGT), which models meta-relation-based message passing and attention mechanisms. HGT significantly enhances performance across a range of tasks and datasets, and achieves first place on the Open Graph Benchmark leaderboard. Second, we propose GPT-GNN, a generative pre-training model for graph neural networks that transfers knowledge from unlabeled data via generative self-supervised tasks to downstream tasks with only a few labels. Comprehensive experiments on the billion-scale Open Academic Graph and Amazon recommendation data demonstrate that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.

Bio. Yizhou Sun is an associate professor in the Department of Computer Science at UCLA. She received her Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2012. Her principal research interest is in mining graphs/networks, and more generally in data mining, machine learning, and network science, with a focus on modeling novel problems and proposing scalable algorithms for large-scale, real-world applications. She is a pioneering researcher in mining heterogeneous information networks, with a recent focus on deep learning on graphs/networks. Yizhou has over 100 publications in books, journals, and major conferences. Tutorials of her research have been given at many premier conferences. She received the 2012 ACM SIGKDD Best Student Paper Award, 2013 ACM SIGKDD Doctoral Dissertation Award, 2013 Yahoo ACE (Academic Career Enhancement) Award, 2015 NSF CAREER Award, 2016 CS@ILLINOIS Distinguished Educator Award, 2018 Amazon Research Award, and 2019 Okawa Foundation Research Grant.

Sujith Ravi (http://www.sravi.org), Director at Amazon

Abstract. Advances in deep learning have enabled us to build intelligent systems capable of perceiving and understanding the real world from text, speech and images. Yet, building real-world, scalable intelligent systems from “scratch” remains a daunting challenge, as it requires us to deal with ambiguity and data sparsity and to solve complex language, vision, dialog, and generation problems. In this talk, I will present powerful neural structured learning frameworks, a precursor to the now widely popular GNNs, that tackle the above challenges by leveraging the power of deep learning combined with graphs, which allow us to model the structure inherent in language and visual data. We use graph-based machine learning as a computing mechanism to design efficient algorithms and address these challenges. Our neural graph learning approach handles massive graphs with billions of vertices and trillions of edges and has been successfully used to power real-world applications at industry scale for response generation, image recognition and multimodal experiences. I will highlight our work on using neural graph learning with a novel class of attention mechanisms over Euclidean and hyperbolic spaces to model complex patterns in knowledge graphs for learning entity relationships, predicting missing facts and performing multi-hop reasoning. Finally, I will describe recent work on leveraging graphs for multi-document news summarization.

Bio. Dr. Sujith Ravi is a Director at Amazon Alexa AI, where he is leading efforts to build the future of multimodal conversational AI experiences at scale. Prior to that, he led and managed multiple ML and NLP teams and efforts at Google AI. He founded and headed Google’s large-scale graph-based semi-supervised learning platform, its deep learning platform for structured and unstructured data, as well as on-device machine learning efforts for products used by billions of people in Search, Ads, Assistant, Gmail, Photos, Android, Cloud and YouTube. These technologies power conversational AI (e.g., Smart Reply), Web and Image Search; on-device predictions in Android and Assistant; and ML platforms such as Neural Structured Learning in TensorFlow, Learn2Compress as a Google Cloud service, and TensorFlow Lite for edge devices.

Dr. Ravi has authored over 100 scientific publications and patents in top-tier machine learning and natural language processing conferences. His work has been featured in the press, including Wired, Forbes, Forrester, The New York Times, TechCrunch, VentureBeat, Engadget, and New Scientist, and won the SIGDIAL Best Paper Award in 2019 and the ACM SIGKDD Best Research Paper Award in 2014. For multiple years, he was a mentor for Google Launchpad startups. Dr. Ravi was the Co-Chair (AI and deep learning) for the 2019 National Academy of Engineering (NAE) Frontiers of Engineering symposium. He was also a Co-Chair for ML workshops at EMNLP 2020, ICML 2019, NAACL 2019, and NeurIPS 2018, and regularly serves as Senior/Area Chair and PC member of top-tier machine learning and natural language processing conferences such as NeurIPS, ICML, ACL, NAACL, AAAI, EMNLP, COLING, KDD, and WSDM.


14:00–14:10 Opening Session

14:10–15:00 Invited Talk by Danai Koutra (University of Michigan, Ann Arbor, USA)

To Trust or Not To Trust? Evaluation Methodology and Benchmarks for Embedding-based Knowledge Graph Completion

15:00–15:10 Break

15:10–16:00 Invited Talk by Yizhou Sun (University of California, Los Angeles, USA)

Graph Neural Networks for Heterogeneous Information Networks

16:00–16:30 Oral Presentations Session 1

A survey of embedding models of entities and relationships for knowledge graph completion

Dat Quoc Nguyen

Graph-based Aspect Representation Learning for Entity Resolution

Zhenqi Zhao, Yuchen Guo, Dingxian Wang, Yufan Huang, Xiangnan He, Bin Gu

Merge and Recognize: A Geometry and 2D Context Aware Graph Model for Named Entity Recognition from Visual Documents

Chuwei Luo, Yongpan Wang, Qi Zheng, Liangchen Li, Feiyu Gao, Shiyu Zhang

Joint Learning of the Graph and the Data Representation for Graph-Based Semi-Supervised Learning

Mariana Vargas-Vieyra, Aurélien Bellet, Pascal Denis

Contextual BERT: Conditioning the Language Model Using a Global State

Timo I. Denk and Ana Peleteiro Ramallo

Graph-to-Graph Transformer for Transition-based Dependency Parsing

Alireza Mohammadshahi and James Henderson

16:30–16:40 Break

16:40–17:30 Invited Talk by Sujith Ravi (Amazon Alexa AI, USA)

Neural Graph Computing at Scale

17:30–18:00 Oral Presentations Session 2

Semi-supervised Word Sense Disambiguation Using Example Similarity Graph

Rie Yatabe and Minoru Sasaki

Incorporating Temporal Information in Entailment Graph Mining

Liane Guillou, Sander Bijl de Vroe, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman

Graph-based Syntactic Word Embeddings

Ragheb Al-Ghezi and Mikko Kurimo

Relation Specific Transformations for Open World Knowledge Graph Completion

Haseeb Shah, Johannes Villmow, Adrian Ulges

TextGraphs 2020 Shared Task on Multi-Hop Inference for Explanation Regeneration

Peter Jansen and Dmitry Ustalov

18:00–18:10 Break

18:10–18:50 Poster Session

PGL at TextGraphs 2020 Shared Task: Explanation Regeneration using Language and Graph Learning Methods

Weibin Li, Yuxiang Lu, Zhengjie Huang, Weiyue Su, Jiaxiang Liu, Shikun Feng, Yu Sun

ChiSquareX at TextGraphs 2020 Shared Task: Leveraging Pretrained Language Models for Explanation Regeneration

Aditya Girish Pawate, Varun Madhavan, Devansh Chandak

Explanation Regeneration via Multi-Hop ILP Inference over Knowledge Base

Aayushee Gupta and Gopalakrishnan Srinivasaraghavan

Red Dragon AI at TextGraphs 2020 Shared Task: LIT: LSTM-Interleaved Transformer for Multi-Hop Explanation Ranking

Yew Ken Chia, Sam Witteveen, Martin Andrews

Autoregressive Reasoning over Chains of Facts with Transformers

Ruben Cartuyvels, Graham Spinks, Marie-Francine Moens

18:50–19:00 Closing Remarks

Timezone is CET (UTC+01:00).


TextGraphs-14 has been successfully accepted as a single-day event at COLING (https://coling2020.org) in Barcelona, Spain! Due to the COVID-19 situation, everything will be organized online.


We invite submissions of up to nine (9) pages, plus bibliography, for long papers and up to four (4) pages, plus bibliography, for short papers.

The COLING 2020 templates must be used; these are provided in both LaTeX and Microsoft Word formats. Submissions will only be accepted in PDF format. Deviations from the provided templates will result in rejection without review. Download the Word and LaTeX templates here: https://coling2020.org/coling2020.zip

Submit papers by the end of the deadline day (timezone is UTC-12) via our Softconf Submission Site: https://www.softconf.com/coling2020/TextGraphs/


  • Workshop papers deadline: Oct 2, 2020

  • Notification of acceptance: Oct 25, 2020

  • Camera-ready papers deadline: Nov 1, 2020

  • Workshop date: Dec 13, 2020


TextGraphs-14 invites submissions on (but not limited to) the following topics:

  • Graph-based and graph-supported machine learning and deep learning methods

      • Graph embeddings

      • Graph-based and graph-supported deep learning (e.g., graph-based recurrent and recursive networks)

      • Probabilistic graphical models and structure learning methods

      • Graph-based methods for reasoning and interpreting deep neural networks

      • Exploration of the capabilities and limitations of graph-based methods applied to neural networks

      • Investigation of aspects of neural networks that are (not) susceptible to graph-based analysis

  • Graph-based methods for Information Retrieval, Information Extraction, and Text Mining

      • Graph-based methods for word sense disambiguation

      • Graph-based representations for ontology learning

      • Graph-based strategies for semantic relation identification

      • Encoding semantic distances in graphs

      • Graph-based techniques for text summarization, simplification, and paraphrasing

      • Graph-based techniques for document navigation and visualization

      • Reranking with graphs

      • Applications of label propagation algorithms, etc.

  • New graph-based methods for NLP applications

      • Random walk methods in graphs

      • Spectral graph clustering

      • Semi-supervised graph-based methods

      • Methods and analyses for statistical networks

      • Small world graphs

      • Dynamic graph representations

      • Topological and pretopological analysis of graphs

      • Graph kernels

  • Graph-based methods for applications on social networks

      • Rumor proliferation

      • E-reputation

      • Multiple identity detection

      • Language dynamics studies

      • Surveillance systems

  • Graph-based methods for NLP and Semantic Web

      • Representation learning methods for knowledge graphs (i.e., knowledge graph embedding)

      • Using graph-based methods to populate ontologies from textual data

      • Inducing knowledge of ontologies into NLP applications using graphs

      • Merging ontologies with graph-based methods using NLP techniques


We are organizing a shared task before the workshop. Our shared task on Explanation Regeneration asks participants to develop methods to reconstruct gold explanations for elementary science questions, using a new corpus of gold explanations that provides supervision and instrumentation for this multi-hop inference task. Each explanation is represented as an “explanation graph”, a set of atomic facts (between 1 and 16 per explanation, drawn from a knowledge base of 5,000 facts) that, together, form a detailed explanation of the reasoning required to answer a question. Linking these facts to achieve strong performance at rebuilding the gold explanation graphs will require methods that perform multi-hop inference. The explanations include both core scientific facts and detailed world knowledge, allowing the task to appeal to those interested in both multi-hop reasoning and common-sense inference.
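To illustrate the task setup (this is not an official baseline), a naive single-hop ranker scores each fact in the knowledge base by lexical overlap with the question and returns the best candidates; the mini knowledge base and question below are hypothetical. Such a ranker retrieves facts that share words with the question but cannot chain facts together, which is exactly the gap multi-hop inference methods must close.

```python
def rank_facts(question, facts):
    """Rank knowledge-base facts by word overlap with the question
    (a naive single-hop heuristic; real systems need multi-hop inference)."""
    q_tokens = set(question.lower().split())
    scored = [(len(q_tokens & set(f.lower().split())), f) for f in facts]
    return [f for score, f in sorted(scored, key=lambda x: -x[0])]

# Hypothetical mini knowledge base and question (for illustration only)
facts = [
    "melting is a kind of phase change",
    "a phase change is when matter changes state",
    "an animal is a kind of living thing",
]
question = "what kind of change is melting ?"
print(rank_facts(question, facts)[0])
# → "melting is a kind of phase change"
```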

More information about the shared task held at TextGraphs 2020 can be found here.

We welcome papers on the workshop topics even if you do not participate in the shared task.


Please direct all questions and inquiries to our official e-mail address (textgraphsOC@gmail.com) or contact any of the organizers via their individual emails.

Connect with us on social media:

● Join us on Facebook: https://www.facebook.com/groups/900711756665369/

● Follow us on Twitter: https://twitter.com/textgraphs

● Join us on LinkedIn: https://www.linkedin.com/groups/4882867


Željko Agić, Unity Technologies, Denmark

Ilseyar Alimova, Kazan Federal University, Russian Federation

Prithviraj Ammanabrolu, Georgia Institute of Technology, USA

Martin Andrews, Red Dragon AI, Singapore

Amir Bakarov, Higher School of Economics, Russian Federation

Tomáš Brychcín, University of West Bohemia, Czech Republic

Ruben Cartuyvels, Catholic University of Leuven, Belgium

Flavio Massimiliano Cecchini, Università Cattolica del Sacro Cuore, Italy

Tanmoy Chakraborty, Indraprastha Institute of Information Technology Delhi (IIIT-D), India

Chen Chen, Megagon Labs, USA

Monojit Choudhury, Microsoft Research, India

Alexandre Duval, Paris-Saclay University, France

Jennifer D'Souza, TIB Leibniz Information Centre for Science and Technology, Germany

Stefano Faralli, University of Rome Unitelma Sapienza, Italy

Goran Glavaš, University of Mannheim, Germany

Carlos Gómez-Rodríguez, Universidade da Coruña, Spain

Natalia Grabar, Université de Lille, France

Aayushee Gupta, IIIT Bangalore, India

Binod Gyawali, Educational Testing Service, USA

Tomáš Hercig, University of West Bohemia, Czech Republic

Dmitry Ilvovsky, Higher School of Economics, Russian Federation

Ming Jiang, University of Illinois at Urbana-Champaign, USA

Sammy Khalife, École Polytechnique, France

Andrey Kutuzov, University of Oslo, Norway

Anne Lauscher, University of Mannheim, Germany

Weibin Li, Baidu, China

Valentin Malykh, Huawei Noah's Ark Lab / Kazan Federal University, Russian Federation

Gabor Melli, OpenGov, USA

Clayton Morrison, University of Arizona, USA

Animesh Mukherjee, IIT Kharagpur, India

Matthew Mulholland, Educational Testing Service, USA

Giannis Nikolentzos, École Polytechnique, France

Enrique Noriega-Atala, University of Arizona, USA

Damien Nouvel, Inalco ERTIM, France

Aditya Girish Pawate, IIT Kharagpur, India

Jan Wira Gotama Putra, Tokyo Institute of Technology, Japan

Zimeng Qiu, Amazon Alexa AI, USA

Steffen Remus, University of Hamburg, Germany

Leonardo F. R. Ribeiro, TU Darmstadt, Germany

Brian Riordan, Educational Testing Service, USA

Viktor Schlegel, University of Manchester, UK

Natalie Schluter, IT University of Copenhagen, Denmark

Robert Schwarzenberg, German Research Center For Artificial Intelligence (DFKI), Germany

Rebecca Sharp, University of Arizona, USA

Artem Shelmanov, Skolkovo Institute of Science and Technology, Russian Federation

Khalil Simaan, University of Amsterdam, The Netherlands

Konstantinos Skianis, BLUAI, Greece

Saatviga Sudhahar, Healx, UK

Mihai Surdeanu, University of Arizona, USA

Yuki Tagawa, Fuji Xerox Co., Ltd., Japan

Mokanarangan Thayaparan, University of Manchester, UK

Antoine Tixier, École Polytechnique, France

Nicolas Turenne, BNU HKBU United International College (UIC), China

Elena Tutubalina, Insilico Medicine, Russian Federation

Vaibhav Vaibhav, Apple, USA

Serena Villata, Université Côte d’Azur, CNRS, Inria, I3S, France

Xiang Zhao, National University of Defense Technology, China


Dmitry Ustalov, Yandex, Russia

Swapna Somasundaran, Educational Testing Service, USA

Alexander Panchenko, Skolkovo Institute of Science and Technology (Skoltech), Russia

Fragkiskos D. Malliaros, CentraleSupélec, Paris-Saclay University, France

Ioana Hulpuș, Data and Web Science Group, University of Mannheim, Germany

Peter Jansen, School of Information, University of Arizona, USA

Abhik Jana, University of Hamburg, Germany