Workshop 4: What are the Next Measurable Challenges in AI?

Building systems that integrate learning, reasoning, and optimization has long been a dream of artificial intelligence. One of the major challenges in this context is to evaluate novel ideas and frameworks on appropriate benchmarks. Too often, the tasks and datasets proposed for experimental evaluation are tailored to specific algorithms or methodologies, and limited to ad-hoc scenarios and application domains. More generally, they lack a broad, open perspective for testing the considered approaches across a variety of tasks and under different conditions, making experimental comparisons hard to obtain.


Can we define a set of requirements for a challenge/benchmark that goes beyond those currently available?

Can we do so with the goal of producing a benchmark (or, better, a benchmarking framework) that meets these requirements and can still be implemented in a reasonable time, possibly by building on top of existing ones?

Program

13:00-13:15 Doors open


Introduction

13:15-13:30 Introduction & Expectations - Luc de Raedt

13:30-14:00 Invited Talk: Lessons Learned at NeurIPS 2021 Datasets and Benchmarks - Joaquin Vanschoren


PART I (grounding the discussion in the literature)

14:00-14:15 Presentation of the Datasets/Systems Tables - Marco Lippi

14:15-15:30 Discussion on the Tables - Working groups

15:30-15:45 Break

PART II (widening the perspective)

15:45-16:45 Panel on Limitations of Existing Benchmarks and New Challenges - Andrea Passerini

  • Marco Gori

  • Joaquin Vanschoren

  • Kristian Kersting

  • Michele Sebag

  • Fosca Giannotti

16:45-18:00 Discussion on the Panel - Working groups


Conclusions

18:00-18:15 What’s Next? - Luc de Raedt