Learning with Rich Experience (LIRE)

Integration of Learning Paradigms

December 13, 2019 @ NeurIPS 2019

Thanks for attending LIRE 2019!

Videos & slides are available in the Schedule tab.

The accepted papers and their PDFs are available in the Accepted Papers tab.

Overview

Machine learning is broadly defined as computational methods that use experience to learn concepts and improve performance. Rich paradigms of learning algorithms have been developed over decades of research, each making different assumptions about the types of experience available, including data labels (e.g., supervised learning), interaction with the environment (e.g., reinforcement learning), structured knowledge (e.g., posterior regularization), and other models (e.g., adversarial learning, knowledge distillation). Each of these paradigms has received in-depth study in machine learning and in many application domains. However, these algorithms have established distinct formalisms of learning, each narrowly limited to making use of only one or a few types of experience. This narrow scope limits the applicability of the algorithms and marks a major gap compared to human learning, whose hallmark is flexibly drawing on diverse sources of information to improve.

It would thus be particularly beneficial to study the underlying connections between these individual paradigms, explore the transfer and integration of techniques, and make use of rich forms of experience. This one-day workshop aims to provide a platform for exchanging ideas on the theoretical and algorithmic foundations of learning with rich experience, identifying key challenges in the field, and charting the most exciting future directions.

Relevant topics include but are not limited to:

  • Works that theoretically study varying modes of learning and supervision and their formal connections
  • Works that combine multiple modes of supervision, such as rewards, expert demonstrations, and language
  • Works that learn from non-traditional or weak forms of supervision, such as structured knowledge, human preferences, or dialogue
  • Works that present novel unification of distinct algorithms for improved modeling and learning in general
  • Benchmarks or real-world applications of these problem settings

Invited Speakers

Important Dates

  • Paper submissions due: September 11, 2019, Anywhere On Earth (UTC-12)
  • Late-breaking submission deadline: September 30, 2019
  • Acceptance notification: October 1, 2019
  • Camera-ready paper submission due: December 6, 2019
  • Workshop: December 13, 2019 (Friday)