Neural + Symbolic Representation & Reasoning

Description

The rapid progress in artificial intelligence over the past decade has been driven by two fundamental forces: massive symbolic knowledge resources (such as Freebase, WordNet, DBpedia, Wikidata, ConceptNet, and NELL) and a renaissance of neural computation techniques (attention mechanisms, distributional semantics, BiLSTMs, Transformers). The core problems of representation and reasoning can now be viewed from two complementary perspectives: that of symbolic, language-grounded representations and that of the continuous vector spaces used in neural methods. Many of the major breakthroughs in knowledge base construction will come at the confluence of these two research streams. The goal of this workshop is to bring together researchers at the frontier of each of these fields and to build new, successful collaborations on neuro-symbolic methods.

Schedule

1:30 - 1:35p Opening Remarks

1:35 - 2:10p Matt Gardner - Reasoning Our Way to Reading [slides]

2:10 - 2:35p Anna Rogers - Word embeddings: 6 years later [slides]

2:35 - 3:00p Sebastian Riedel - Interpretation of Natural Language Rules in Conversational Machine Reading [slides]

3:00 - 3:30p Break

3:35 - 3:42p Aaron Traylor - Learning Domain-General Reasoning by Exclusion with Neural Networks

3:42 - 3:50p Hans Chalupsky - Chameleon 2.0: Integrating Neural and Symbolic Reasoning in PowerLoom

3:50 - 4:20p Tom Kwiatkowski - Learning Representations of Entities and Relations Directly from Text [slides]

4:20 - 4:45p Andres Campero - Learning Concepts through Neural Logical Induction [slides]

5:10 - 5:30p Panel


Venue

The workshop will be co-located with the 1st conference on Automated Knowledge Base Construction at the University of Massachusetts, Amherst. Events will take place at:

Old Chapel
University of Massachusetts Amherst
144 Hicks Way
Amherst, Massachusetts 01003
https://www.umass.edu/oldchapel/

Invited Speakers

Matt Gardner (AI2)

Reasoning Our Way to Reading

Abstract: Our current best reading systems are far below their potential, struggling to understand text at anything more than a superficial level. In this talk I try to reason out what it means to "read", and how reasoning systems might help us get there. I will introduce three reading comprehension datasets that require systems to reason at a deeper level about the text that they read, using numerical, coreferential, and implicative reasoning abilities. I will also describe some early work on models that can perform these kinds of reasoning.

Speaker Bio:

Matt is a senior research scientist at the Allen Institute for Artificial Intelligence (AI2) on the AllenNLP team, and a visiting scholar at UCI. His research focuses primarily on getting computers to read and answer questions, dealing both with open domain reading comprehension and with understanding question semantics in terms of some formal grounding (semantic parsing). He is particularly interested in cases where these two problems intersect, doing some kind of reasoning over open domain text. He is the original author of the AllenNLP toolkit for NLP research, and he co-hosts the NLP Highlights podcast with Waleed Ammar.

Tom Kwiatkowski (Google) - Learning Representations of Entities and Relations Directly from Text

Abstract:

There has been enormous recent success in using language modeling losses to learn contextualized representations of text. However, since these representations are highly context dependent, they currently have no way of connecting and condensing information from multiple texts.

In this talk, I will present some preliminary work on learning context independent representations of entities and relations from text alone. I will show that by learning to predict entities in context, we can learn standalone entity representations that are sufficiently powerful to answer questions such as 'Where did Charles Darwin write Origin of Species?'. I will also present a general purpose method of learning relation representations that abstract away from the manner in which they are expressed.

I will argue that instead of defining an ontology, and then training systems to fill it, we should design the structure of the representation that we want, and use this structure to design the losses with which we train our machine reading systems.
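
To make the entity-prediction idea above concrete, here is a minimal sketch (not the speaker's actual model) of learning a standalone entity embedding table by training a context encoder to predict which entity fills a blanked mention. The module layout, sizes, and data format are all assumptions made for illustration.

    # Toy sketch: learn context-independent entity embeddings by predicting
    # which entity fills a blanked mention. Names and sizes are hypothetical.
    import torch
    import torch.nn as nn

    class EntityPredictor(nn.Module):
        def __init__(self, vocab_size, num_entities, dim=128):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, dim)
            self.context_encoder = nn.LSTM(dim, dim, batch_first=True)
            # This table holds the standalone, context-independent entity vectors.
            self.entity_emb = nn.Embedding(num_entities, dim)

        def forward(self, context_token_ids):
            # Encode the context surrounding a blanked entity mention.
            hidden, _ = self.context_encoder(self.word_emb(context_token_ids))
            query = hidden[:, -1, :]                   # (batch, dim)
            # Score every entity against the context representation.
            return query @ self.entity_emb.weight.T    # (batch, num_entities)

    model = EntityPredictor(vocab_size=30000, num_entities=50000)
    context_ids = torch.randint(0, 30000, (4, 20))   # contexts with the mention blanked
    gold_entity = torch.randint(0, 50000, (4,))      # index of the removed entity
    loss = nn.CrossEntropyLoss()(model(context_ids), gold_entity)
    loss.backward()

After training under this kind of objective, the rows of the entity table can be queried and compared independently of any particular context.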

Bio: Tom is a Research Scientist in Google's New York office. His focus is on building representations of the knowledge that is expressed in text, and he has a particular interest in modeling the ways in which different texts agree and disagree with each other. Tom has worked on question answering products at Google, as well as engaging with academia. Before joining Google he did a PhD in Edinburgh with Sharon Goldwater and Mark Steedman, and a post-doc at the University of Washington with Luke Zettlemoyer, both with a focus on semantic parsing and grammar induction.

Sebastian Riedel (UCL/FAIR)

Interpretation of Natural Language Rules in Conversational Machine Reading

Abstract

Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. One example is the task of interpreting regulations to answer “Can I...?” or “Do I have to...?” questions such as “I am working in Canada. Do I have to carry on paying UK National Insurance?” after reading a UK government website about this topic. This task requires both the interpretation of rules and the application of background knowledge. It is further complicated by the fact that, in practice, most questions are underspecified, and a human assistant will regularly have to ask clarification questions such as “How long have you been working abroad?” when the answer cannot be directly derived from the question and text. In this talk, I will present our work on formalising this task and developing a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios.
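
As a purely illustrative reduction of this task (not the system or dataset described in the talk), a rule can be thought of as a set of conditions: conditions already resolved by the user's scenario are checked, and the first unresolved one becomes a clarification question. The rule text, questions, and matching scheme below are hypothetical stand-ins.

    # Toy illustration of conversational rule interpretation; the rule,
    # questions, and matching scheme are hypothetical stand-ins.
    def interpret_rule(conditions, scenario):
        """conditions: maps each condition to a clarification question.
        scenario: maps already-resolved conditions to True/False."""
        for condition, question in conditions.items():
            if condition not in scenario:
                return ("ask", question)      # underspecified: ask before answering
            if not scenario[condition]:
                return ("answer", "No")       # a required condition fails
        return ("answer", "Yes")              # all conditions hold

    rule = {
        "works abroad": "Are you currently working abroad?",
        "abroad for less than 52 weeks": "How long have you been working abroad?",
    }
    print(interpret_rule(rule, {"works abroad": True}))
    # -> ('ask', 'How long have you been working abroad?')

The real task is much harder than this sketch suggests, since the conditions and their logical structure must themselves be read out of natural language rules rather than given as a dictionary.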

Bio

Sebastian Riedel is a researcher at Facebook AI Research, a professor of Natural Language Processing and Machine Learning at University College London (UCL), and an Allen Distinguished Investigator. He works at the intersection of Natural Language Processing and Machine Learning, focusing on teaching machines how to read and reason. He was educated in Hamburg-Harburg (Dipl. Ing.) and Edinburgh (MSc, PhD), and worked at the University of Massachusetts Amherst and the University of Tokyo before joining UCL.

Anna Rogers (UMass Lowell)

Word Embeddings: 6 Years Later

Abstract:

This talk attempts to outline the overall trajectory of research on distributional meaning representations since the start of the word2vec boom in 2013 and up to the current contextualized representations. I will revisit some of the underlying assumptions and key results, with a particular focus on the shift in evaluation paradigms and the likely directions for the future.

Speaker bio:

Anna is a post-doctoral associate in the Computer Science Department, University of Massachusetts (Lowell). She works at the intersection of linguistics, natural language processing, and machine learning. Her current projects span intrinsic evaluation of meaning representations, temporal and analogical reasoning, and question answering. She is one of the organizers of the upcoming 3rd Workshop on Evaluating Vector Space Representations for NLP, co-located with NAACL 2019.

Andres Campero (MIT)

Learning Concepts through Neural Logical Induction

Abstract: We present and discuss a framework that learns concepts with both symbolic and subsymbolic content, acquiring some of the advantages of each. In the context of semantic knowledge presented as a set of facts of the form Relation[Subject, Object] (e.g., Father[Alberto, Andres]), a generative algorithm combines logical reasoning under forward induction with neural networks to simultaneously learn the logical structure underlying the data (e.g., GRANDFATHER[X,Y] <-- FATHER[X,Z], PARENT[Z,Y]) and dense vector representations for the relations.
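
As a loose sketch of the general idea (not the authors' algorithm), each relation can be given a dense vector, a candidate rule such as GRANDFATHER[X,Y] <-- FATHER[X,Z], PARENT[Z,Y] can be scored by comparing the head relation's vector with a composition of the body vectors, and facts derived by forward chaining inherit that score. The composition (sum), the scorer (cosine), and the extra constant below are assumptions made for illustration.

    # Toy sketch: score a chain rule with dense relation vectors, then derive
    # new facts by forward chaining. Embeddings here are random; in practice
    # they would be trained so that good rules score highly.
    import numpy as np

    rng = np.random.default_rng(0)
    emb = {r: rng.normal(size=16) for r in ["father", "parent", "grandfather"]}

    def rule_score(head, body):
        # Cosine similarity between the head vector and the summed body vectors.
        composed = sum(emb[b] for b in body)
        h = emb[head]
        return float(composed @ h / (np.linalg.norm(composed) * np.linalg.norm(h)))

    facts = {("father", "Alberto", "Andres"), ("parent", "Andres", "Sofia")}

    def forward_chain(facts, head, body):
        """Apply a chain rule head[X,Y] <-- body[0][X,Z], body[1][Z,Y]."""
        score = rule_score(head, body)
        return {(head, x, y, round(score, 3))
                for (r1, x, z1) in facts
                for (r2, z2, y) in facts
                if r1 == body[0] and r2 == body[1] and z1 == z2}

    print(forward_chain(facts, "grandfather", ["father", "parent"]))
    # e.g. {('grandfather', 'Alberto', 'Sofia', <score>)}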

Speaker Bio:

I am a PhD student in the Brain and Cognitive Sciences Department (BCS) and in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. I am interested in the interaction of symbolic probabilistic reasoning and sub-symbolic statistical learning, in an attempt to understand and replicate higher-level cognition in a way that is meaningful for human cognition. To do so, I explore models that combine structured generative frameworks, like probabilistic programs, with deep learning. I care about things like compositionality and the origin of concepts. My advisor is Josh Tenenbaum.

Call for Papers

We solicit new and visionary work on neural and symbolic methods for representation and reasoning. Examples of relevant topics include:

  • Extensions and integrations of disparate symbolic representations
  • Methods for neural reasoning in high-dimensional vector spaces
  • Seamless translations between symbolic and neural representation and reasoning
  • Improving representational convergence of symbolic and neural methods
  • Symbolic interpretations for explaining neural reasoning
  • Analogical reasoning from neural or symbolic perspectives

We will accept:

  • 2-page extended abstracts
  • 4-page short papers
  • Extended versions of conference papers (8+ pages)

Page limits do not include any references.

All papers will be non-archival and presented as posters or contributed talks.

Please submit papers via EasyChair: https://easychair.org/conferences/?conf=nsrrakbc19

Important Dates

    • Submission Deadline: 4/26/19
    • Paper Notifications: 5/5/19
    • Final version: 5/10/19
    • Workshop: 5/22/19

Organizers: