Language-Learning-Logic Workshop (3L 2017)

Jointly organised by Imperial College's Department of Computing and Data Science Institute

Location: Imperial College London, South Kensington Campus, UK

Venue: Huxley Building, Lecture Theatre 311

Date: 21 September 2017

There is widespread agreement that AI has grown over the years into a fragmented landscape of sub-disciplines with a narrow focus. Specialisation has certainly facilitated great advances within the sub-disciplines and in AI overall, but fragmentation has created a silo mentality which hinders cross-fertilisation and further advances within AI.

This workshop will bring together researchers from three broad AI areas, namely natural language processing (NLP), machine learning (ML) and logic-based symbolic AI, to discuss and explore opportunities for cross-fertilisation centred on NLP. These may include, for example:

  • the use of (existing or new) ML methods to support the extraction of knowledge from text via NLP, providing input for (existing or new) symbolic AI reasoning methods;
  • the integration of symbolic AI within ML to address NLP challenges;
  • the integration of ML and NLP techniques to address broader AI challenges, e.g. image understanding, emotion detection, and human-agent and human-robot interactions.

The workshop also aims to discuss and identify possible directions for future research at the intersection of NLP, ML and logic.

You can find the programme here.

Invited speakers

Stephen H. Muggleton (Imperial College London)

"Meta-Interpretive Learning of Language in Logic"

Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. The talk will summarise applications of MIL including the learning of regular and context-free grammars, and learning string transformations for spreadsheet applications. The talk will conclude by pointing to the many challenges which remain to be addressed within this new area.
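To give a rough flavour of the idea of saving meta-substitutions, the following Python toy sketch (a hypothetical illustration, not the MIL system, which is based on a Prolog meta-interpreter) proves a goal via a single "chain" meta-rule P(X,Y) :- Q(X,Z), R(Z,Y) against ground background facts, and turns the meta-substitution that closes the proof into a learned clause:

    # Toy illustration (hypothetical, not the MIL system): prove a goal via the
    # chain meta-rule P(X,Y) :- Q(X,Z), R(Z,Y) over ground background facts and
    # save the meta-substitution for Q and R that makes the proof succeed.
    BACKGROUND = {
        ("parent", "alice", "bob"),
        ("parent", "bob", "carol"),
    }

    def prove_with_chain(x, y, candidate_preds):
        """Return a meta-substitution (Q, R) proving the goal on (x, y), or None."""
        for q in candidate_preds:
            for r in candidate_preds:
                for (p, a, z) in BACKGROUND:
                    if p == q and a == x and (r, z, y) in BACKGROUND:
                        return (q, r)
        return None

    # "Learn" grandparent/2 from one positive example by saving the substitution.
    sub = prove_with_chain("alice", "carol", ["parent"])
    if sub is not None:
        q, r = sub
        print(f"grandparent(X,Y) :- {q}(X,Z), {r}(Z,Y).")

The actual system works with multiple meta-rules, predicate invention and search over proofs, but the saved meta-substitutions play the same program-forming role as in this sketch.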


Roberto Navigli (Sapienza University of Rome)

"Multilinguality for free, or why you should care about linking to (BabelNet) synsets"

Multilinguality is a key feature of today’s Web and a pervasive one in an increasingly interconnected world. However, many semantic representations, such as word (and often sense) embeddings, are grounded in the language from which they are obtained. In this talk I will argue that there is a pressing need to link our meaning representations to large-scale multilingual semantic networks such as BabelNet, and will show several tasks and applications where multilingual representations of meaning provide a big boost, including key industrial use cases from Babelscape, our Sapienza startup company.


André F.T. Martins (Unbabel)

"AD3 and Sparsemax: Structured Inference for Natural Language Processing"

In the first part of the talk, I will present AD3, a new algorithm for approximate inference on factor graphs. AD3 has a modular architecture, where local subproblems are solved independently, and their solutions are gathered to compute a global update. I will show how to solve these AD3 subproblems for dense and structured factors, as well as factors imposing first-order logic constraints, and I will end by describing experiments on dependency parsing.
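As a rough intuition for this decompose-and-gather pattern, here is a toy consensus-ADMM sketch in numpy (illustrative only, not AD3's actual factor subproblems): each simple local quadratic is minimised independently, and the local solutions are then averaged into a global update.

    # Toy consensus-ADMM sketch (illustrative, not AD3 itself): solve each local
    # subproblem 0.5*(x - a_i)^2 independently, then gather the solutions into a
    # global (consensus) update.
    import numpy as np

    a = np.array([1.0, 2.0, 4.0])            # data defining the local objectives
    rho = 1.0                                 # penalty parameter
    x = np.zeros_like(a)                      # local copies of the variable
    u = np.zeros_like(a)                      # scaled dual variables
    z = 0.0                                   # global consensus variable

    for _ in range(100):
        x = (a + rho * (z - u)) / (1 + rho)   # independent local updates
        z = np.mean(x + u)                    # gather: global update
        u = u + x - z                         # dual update

    print(z)                                  # converges to the mean of a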

In the second part of the talk, I will propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, I will show how its Jacobian can be efficiently computed, enabling its use in a neural network trained with backpropagation. I will show promising empirical results in attention-based neural networks for natural language inference.
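For intuition, a minimal numpy sketch of the sparsemax mapping, i.e. the Euclidean projection of the score vector onto the probability simplex, is given below; unlike softmax, it can assign exactly zero probability to low-scoring entries.

    # Minimal sketch of sparsemax: project the score vector onto the probability
    # simplex, which can zero out low-scoring entries (unlike softmax).
    import numpy as np

    def sparsemax(z):
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]               # scores in descending order
        k = np.arange(1, len(z) + 1)
        cumsum = np.cumsum(z_sorted)
        support = 1 + k * z_sorted > cumsum       # entries kept in the support
        k_z = k[support][-1]                      # support size
        tau = (cumsum[support][-1] - 1) / k_z     # threshold
        return np.maximum(z - tau, 0.0)

    print(sparsemax([2.0, 1.0, -1.0]))            # -> [1. 0. 0.], fully sparse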


Björn W. Schuller (Imperial College London)

"Deep Profiling: What Your Words Tell a Neural Net these Days"

Spoken and written language-based profiling has recently become increasingly "deep" in two ways. First, the information assessed automatically about the "person behind the words" has become richer and "deeper", revealing ever more characteristics of the author or speaker and information about their state. Second, increasingly deep neural learning architectures are being used to best perform this task from an algorithmic point of view. In this context, the talk starts with a short overview of the current state of spoken and written language-based profiling. It then covers autoencoder- and long short-term memory-based approaches to natural language processing for tasks such as emotion, personality, or health state recognition from one's words. It further offers inspiration on how to inject knowledge databases and seamlessly integrate further modalities in an increasingly "end-to-end" learning paradigm.

Selected speakers

Peter Schüller (Marmara University) "ASP-based Inductive Logic Programming applied to Phrase Chunking: Challenges and Improvements"

Christos Christodoulopoulos (Amazon Research Cambridge) "Simple Large-scale Relation Extraction from Unstructured Text"

Hatem Mousselly-Sergieh (TU Darmstadt) "Neural, Multimodal, Energy-based Approach for Knowledge Graph Completion"

James Thorne (University of Sheffield) "Introducing FEVER: a large-scale dataset for Fact Extraction and VERification"

Teresa Botschen (TU Darmstadt) "How to find the right multimodal representations for Frame Identification"

Oana Cocarascu (Imperial College London) "Identifying argumentative relations using deep learning"

Participation

Participation in the workshop is free, subject to availability.

To attend and give a presentation, please submit a 2-page abstract of your intended presentation via EasyChair.

To attend without presenting, please submit a half-page summary of your research interests and previous (relevant) research, also via EasyChair.

The deadline for submissions is 15 September 2017. Notifications will be sent on a rolling basis as submissions are received.

Organisers

Oana Cocarascu

Miguel Molina-Solana

Francesca Toni


Any questions? Send them to 3l@easychair.org