Oral Presentations
Poster Presentations
Control Graph as Unified IO for Morphology-Task Generalization
Active Acquisition for Multimodal Temporal Data: A Challenging Decision-Making Task
Pre-Training for Robots: Leveraging Diverse Multitask Data via Offline Reinforcement Learning
LMPriors: Pre-Trained Language Models as Task-Specific Priors
Hyper-Decision Transformer for Efficient Online Policy Adaptation
On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning
Pareto-Efficient Decision Agents for Offline Multi-Objective Reinforcement Learning
Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
Offline Reinforcement Learning from Heteroskedastic Data Via Support Constraints
Offline Q-learning on Diverse Multi-Task Data Both Scales And Generalizes
Return Augmentation gives Supervised RL Temporal Compositionality
Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks
Planning With Large Language Models Via Corrective Re-Prompting
SMART: Self-supervised Multi-task pretrAining with contRol Transformers
Skill Acquisition by Instruction Augmentation on Offline Datasets
ConserWeightive Behavioral Cloning for Reliable Offline Reinforcement Learning
Wall Street Tree Search: Risk-Aware Planning for Offline Reinforcement Learning
A Mixture-of-Expert Approach to RL-based Dialogue Management
Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change)
Build generally reusable agent-environment interaction models
CLaP: Conditional Latent Planners for Offline Reinforcement Learning
Foundation Models for Semantic Novelty in Reinforcement Learning
Multi-step Planning for Automated Hyperparameter Optimization with OptFormer
Adapting Pretrained Vision-Language Foundational Models to Medical Imaging Domains
Deep Transformer Q-Networks for Partially Observable Reinforcement Learning
Foundation Models for History Compression in Reinforcement Learning
What Makes Certain Pre-Trained Visual Representations Better for Robotic Learning?
PACT: Perception-Action Causal Transformer for Autoregressive Robotics Pretraining
Revealing the Bias in Large Language Models via Reward Structured Questions
Elicitation Inference Optimization for Multi-Principal-Agent Alignment
Supervised Q-Learning can be a Strong Baseline for Continuous Control
Constrained MDPs can be Solved by Early-Termination with Recurrent Models
Understanding Hindsight Goal Relabeling Requires Rethinking Divergence Minimization
Contextual Transformer for Offline Meta Reinforcement Learning