Synergies Among Neuro-Symbolic, Graph Embeddings and Language Models

Date: 23 November 2022

Location: Celestijnenlaan 200A, Department of Computer Science, Room C++ (05.01)

Program:

8:50-9:00 Opening Remarks - Emanuele Sansone

9:00-13:00 Getting to Know Others' Work - Morning Talks (15-min talk + 5-min questions; one work per talk)

9:00-10:30 Neuro-symbolic learning - KUL + TUDA

  • Robin Manhaeve “DeepProbLog: Neural Probabilistic Logic Programming” NeurIPS 2018

  • Thomas Winters “DeepStochLog: Neural Stochastic Logic Programming” AAAI 2022

  • Zihan Ye “Differentiable Meta-Logical Programming” (online)

  • Devendra Dhami “αILP: Thinking Visual Scenes as Differentiable Logic Programs” - Machine Learning 2022 (online)

  • Summary of block (10 mins) - challenges/research questions - Robin Manhaeve

10:30-11:20 Knowledge graph embeddings - KUL + CINI

  • Pietro Barbiero “Concept Embedding Models” NeurIPS 2022 (online)

  • Francesco Giannini “Relational Reasoning Networks” - under review at TNNLS

  • Summary of block (10 mins) - challenges/research questions - Francesco Giannini

11:20-11:30 Break

11:30-12:40 Language models and graph embeddings - Fraunhofer

  • Richard Rutmann “Large Language Models”

  • Charvi Jain “Knowledge Integration Into Language Models”

  • Alexander Weber “Retriever-Based Language Models”

  • Summary of block (10 mins) - challenges/research questions - Mehdi Ali

12:40-13:00 Commonsense reasoning with language models - EPFL

  • Debjit Paul “Neuro-Symbolic Commonsense Reasoning in NLP” (online)

13:00-14:00 Lunch break

14:00-14:30 Presentations by online partners - 5 minutes each

14:30-16:30 Identifying Common/New Challenges - Afternoon Session

Goal: identify a common research question/problem and define actionable points for follow-up work

  • 14:30-16:00 Discussion in focus groups

  • 16:00-16:30 Final presentation of outcomes