We are happy to announce that the keynote talk will be given by William W. Cohen.
William Cohen is a Principal Scientist at Google, based in Google's Pittsburgh office. He received his bachelor's degree in Computer Science from Duke University in 1984 and a PhD in Computer Science from Rutgers University in 1990. From 1990 to 2000 Dr. Cohen worked at AT&T Bell Labs and later AT&T Labs-Research, and from April 2000 to May 2002 he worked at Whizbang Labs, a company specializing in extracting information from the web. From 2002 to 2018, Dr. Cohen worked at Carnegie Mellon University in the Machine Learning Department, with a joint appointment in the Language Technologies Institute, as an Associate Research Professor, a Research Professor, and a Professor. He was also the Director of the Undergraduate Minor in Machine Learning at CMU and co-Director of the Master of Science in Machine Learning program.
Dr. Cohen is a past president of the International Machine Learning Society. He has served as an action editor for the AI and Machine Learning series of books published by Morgan & Claypool, for the journal Machine Learning, the journal Artificial Intelligence, the Journal of Machine Learning Research, and the Journal of Artificial Intelligence Research. He was General Chair for the 2008 International Machine Learning Conference, held July 6-9 at the University of Helsinki, Finland; Program Co-Chair of the 2006 International Machine Learning Conference; and Co-Chair of the 1994 International Machine Learning Conference. Dr. Cohen was also the co-Chair for the 3rd Int'l AAAI Conference on Weblogs and Social Media, held May 17-20, 2009 in San Jose, and was the co-Program Chair for the 4th Int'l AAAI Conference on Weblogs and Social Media. He is an AAAI Fellow, and was a winner of the 2008 SIGMOD "Test of Time" Award for the most influential SIGMOD paper of 1998 and the 2014 SIGIR "Test of Time" Award for the most influential SIGIR paper of 2002-2004.
Dr. Cohen's research interests include information integration and machine learning, particularly information extraction, text categorization and learning from large datasets. He has a long-standing interest in statistical relational learning and learning models, or learning from data, that display non-trivial structure. He holds seven patents related to learning, discovery, information retrieval, and data integration, and is the author of more than 200 publications.
Knowledge-graph aware language models
Neural language models, which can be pretrained on very large corpora, turn out to "know" a lot about the world, in the sense that they can be trained to answer questions surprisingly reliably. However, "language models as knowledge graphs" have many disadvantages: for example, they cannot be easily updated when information changes. I will describe recent work by my team and others on incorporating symbolic knowledge into language models and question-answering systems, and also comment on some of the remaining challenges in integrating symbolic, KG-like reasoning with neural NLP.
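To make the "language models as knowledge graphs" framing concrete, here is a brief illustrative sketch (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is mentioned in the abstract) of probing a pretrained masked language model for a fact it absorbed during pretraining:

```python
# Minimal sketch, not from the talk: query a pretrained masked LM for a fact.
# Assumes `pip install transformers` and the public bert-base-uncased checkpoint.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model fills in the blank from what it absorbed during pretraining;
# no explicit knowledge graph is consulted.
for pred in unmasker("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```

If the underlying fact later changes, the model's answer can only be revised by retraining or fine-tuning, which is exactly the update problem the abstract alludes to.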
We are also happy to host Ivan Titov as an invited speaker at our workshop.
Ivan Titov is an Associate Professor (Reader / UHD) at the Universities of Edinburgh and Amsterdam. His current research focuses on natural language understanding (including semantic parsing, question answering and information extraction), natural language generation (text summarization and machine translation), and, more generally, on ML for NLP (latent variable models, structured prediction, interpreting neural models). He has been awarded an ERC Starting Grant, a Dutch VIDI fellowship, and Google Faculty Awards. He was a program chair for CoNLL 2018, has been an action editor at TACL and JMLR and a member of the advisory board of the European Chapter of the ACL, and is a program co-chair for the upcoming ICLR 2021.
Integrating Knowledge with Graph Neural Networks and Uncovering their Decision Process
It is natural to represent prior knowledge as graphs (e.g., knowledge bases or linguistic structures), and graph neural networks (GNNs) provide a flexible framework for incorporating them into NLP models. I will discuss how GNNs can be applied to NLP problems, their strengths, and their limitations. One of the limitations of GNNs is their perceived lack of interpretability. I will show how we can extract the edges and paths that a model relies on when making a prediction (differentiable graph masking, GraphMask), uncovering its decision process. If time permits, I will also touch on work on running GNNs on latent graphs (i.e., inducing graphs at the same time) and on characterizing GNNs' expressive power. Joint work with Michael Schlichtkrull, Nicola De Cao, Jasmijn Bastings, Diego Marcheggiani, Wilker Aziz, as well as other colleagues at the Universities of Edinburgh and Amsterdam.
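As a purely illustrative sketch of the edge-masking idea (not the speaker's GraphMask implementation; the toy graph, features, and message function below are invented for the example), consider message passing in which each edge carries a mask value in [0, 1]:

```python
# Minimal sketch of masked GNN message passing on a hypothetical toy graph.
import numpy as np

rng = np.random.default_rng(0)

# Toy directed graph: 4 nodes with 2-dimensional features (all hypothetical).
node_feats = rng.normal(size=(4, 2))
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]    # (source, target) pairs
edge_mask = np.array([1.0, 1.0, 0.0, 1.0])  # 0.0 effectively drops edge (2, 3)

W = rng.normal(size=(2, 2))                 # shared linear message function

def masked_gnn_layer(h, edges, mask, W):
    """One round of message passing where each message is scaled by its edge mask."""
    messages = np.zeros_like(h)
    for m, (src, tgt) in zip(mask, edges):
        messages[tgt] += m * (h[src] @ W)   # message weighted by the edge's mask
    return np.tanh(h + messages)            # simple residual update

updated = masked_gnn_layer(node_feats, edges, edge_mask, W)
print(updated)
```

Setting a mask entry to 0 removes that edge's contribution, so comparing predictions under different masks reveals which edges the output actually depends on; GraphMask, as described in the abstract, answers this question in a differentiable, learned way.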