This graduate seminar explores Active Representation Learning (ARL), an emerging area at the intersection of active learning, sequential decision-making, and representation learning. ARL asks how an agent can actively acquire data (via queries or actions) to learn effective representations that improve task performance. We will cover foundational concepts in active learning, bandits, and reinforcement learning, then discuss recent research on task-aware active learning, goal-directed exploration (e.g., in robotics), generative active learning, and active fine-tuning of foundation models.
This course opens with tutorials (weeks 1-3) on the fundamentals of active learning & decision making, (task-aware) representation learning, and learning for structured exploration, followed by discussions of research papers highlighting key practical challenges and recent advances.
Students will read and discuss recent research papers, lead student lectures (weeks 4-8), and conduct a group research project in relevant areas, e.g., investigating ARL in applied domains (robotics, vision, science) and in fundamental settings (theory of active learning and representation learning).
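To make the "actively acquire data via queries" idea concrete, here is a minimal sketch of pool-based active learning in the classic 1-D threshold setting. The threshold `theta`, the pool, and the oracle are made-up toy data for illustration, not material from the course: uncertainty sampling here reduces to binary search, so the learner needs only O(log n) label queries instead of the O(n) a passive learner would.

```python
import random

def active_threshold_learning(pool, oracle):
    """Adaptively query labels for a 1-D threshold classifier (y = 1 iff x >= theta).

    Uncertainty sampling always queries the midpoint of the still-unlabeled
    region, i.e., binary search; returns (learned threshold, #queries).
    """
    pool = sorted(pool)
    lo, hi = 0, len(pool) - 1        # invariant: points left of lo are 0, right of hi are 1
    queries = 0
    while lo <= hi:
        mid = (lo + hi) // 2         # most uncertain point: middle of the unlabeled region
        queries += 1
        if oracle(pool[mid]) == 1:
            hi = mid - 1
        else:
            lo = mid + 1
    learned = pool[lo] if lo < len(pool) else float("inf")
    return learned, queries

random.seed(0)
theta = 0.37                          # unknown ground-truth threshold (toy value)
pool = [random.random() for _ in range(1000)]
oracle = lambda x: int(x >= theta)    # labeling oracle the learner queries

learned, n_queries = active_threshold_learning(pool, oracle)
print(f"learned threshold ~ {learned:.3f} after {n_queries} queries")
```

With 1000 unlabeled points, the learner pins down the threshold after at most 10 label queries, a simple instance of the label-efficiency gains that active learning pursues in richer settings.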
Lectures on Tu/Th 2:00pm-3:20pm in RY 251
Office hours (by appointment): Tu/Th 3:30pm-4:00pm in JCL 317
First lecture: 3/25/2025
Announcements on Canvas
Discussion via Ed Discussion
Yuxin Chen: chenyuxin@uchicago.edu
Paper reviews and lecture presentation
Research project (Report + poster presentation):
The research project is optional for students taking the course as a seminar and mandatory for students taking it as an elective.
CMSC 35400/TTIC 31020 or equivalent, or permission from the instructor
Part I: Tutorials
Week 1: Fundamentals of active learning & decision making
Week 2: Bandits & Deep Exploration
Week 3: Reinforcement learning & representation learning in decision making
Part II: Student-led paper discussion
Weeks 4-8
Deep active learning (scalable acquisition strategies and efficient labeling)
Generative active learning (data synthesis and selection; learning active learning)
Representation learning for bandits (neural bandits and exploration in high dimensions)
Goal-oriented exploration (active multi-task representation learning; goal-conditioned curiosity & exploration; goal generation and curriculum)
Active fine-tuning of foundation models (active preference learning for LLM alignment; active in-context learning (demonstration selection))
Week 9: Student projects