Learning with Rich Experience:
Integration of Learning Paradigms
Machine learning is broadly defined as computational methods that use experience to learn concepts and improve performance. Rich paradigms of learning algorithms have been developed over decades of research, each making different assumptions about the types of experience available, including data labels (e.g., supervised learning), interaction with the environment (e.g., reinforcement learning), structured knowledge (e.g., posterior regularization), other models (e.g., adversarial learning, knowledge distillation), and so forth. Each of these paradigms has received in-depth study in machine learning and many application domains. However, these algorithms have established distinct formalisms of learning, each narrowly limited to making use of only one or a few types of experience. This narrow scope limits the applicability of the algorithms and stands in stark contrast to human learning, whose hallmark is the flexible use of diverse sources of information to improve.
It would thus be particularly beneficial to study the underlying connections between the individual paradigms, explore technique transfer and integration, and make use of rich forms of experience. This one-day workshop aims to provide a platform for exchanging ideas on the theoretical and algorithmic foundations of learning with rich experience, identifying key challenges in the field, and establishing the most exciting future directions.
Relevant topics include but are not limited to:
- Works that theoretically study varying modes of learning and supervision and their formal connections
- Works that combine multiple modes of supervision, such as rewards, expert demonstrations, and language
- Works that learn from non-traditional or weak forms of supervision, such as structured knowledge, human preferences, or dialogue
- Works that present novel unifications of distinct algorithms for improved modeling and learning
- Benchmarks or real-world applications of these problem settings