This is the supporting material for the paper: Salience-Invariant Consistent Policy Learning for Generalizing Visual Reinforcement Learning
Abstract:
Generalizing policies to unseen scenarios remains a critical challenge in visual reinforcement learning, where agents often overfit to the specific visual observations of the training environment. In unseen environments, distracting pixels may lead agents to extract representations containing task-irrelevant information. As a result, agents may deviate from the optimal behaviors learned during training, thereby hindering visual generalization. To address this issue, we propose the Salience-Invariant Consistent Policy Learning (SCPL) algorithm, an efficient framework for zero-shot generalization. Our approach introduces a novel value consistency module alongside a dynamics module to effectively capture task-relevant representations. The value consistency module, guided by saliency, ensures the agent focuses on task-relevant pixels in both original and perturbed observations, while the dynamics module uses augmented data to help the encoder capture dynamics- and reward-relevant representations. Additionally, our theoretical analysis highlights the importance of policy consistency for generalization. To strengthen this, we introduce a policy consistency module with a KL divergence constraint to maintain consistent policies across original and perturbed observations. Extensive experiments on the DMC-GB, Robotic Manipulation, and CARLA benchmarks demonstrate that SCPL significantly outperforms state-of-the-art methods in terms of generalization. Notably, SCPL achieves average performance improvements of 14%, 39%, and 69% in the challenging DMC video hard setting, the Robotic hard setting, and the CARLA benchmark, respectively.
Overview of SCPL. SCPL contains three components: a value consistency module, a policy consistency module, and a dynamics module. The value consistency module enables the agent to consistently capture precise task-relevant pixels in both original and augmented observations for accurate value estimation. SCPL then regularizes the policy network with a KL divergence constraint between the action distributions for original and augmented observations, enabling the agent to make stable decisions in test environments. Finally, a dynamics model trained on both original and augmented data serves as an auxiliary task that encourages the encoder to provide robust embeddings for the value function and the policy network.
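To make the interaction of the three modules concrete, the following PyTorch-style sketch combines the three training signals in one function. It is an illustrative reconstruction rather than the authors' implementation: the names `encoder`, `critic`, `actor`, `dynamics_head`, and `reward_head` are hypothetical, the actor is assumed to return a `torch.distributions` object, and the loss weights SCPL uses are omitted.

```python
import torch
import torch.nn.functional as F

def scpl_losses(encoder, critic, actor, dynamics_head, reward_head,
                obs, aug_obs, action, target_q, reward, next_obs):
    """Illustrative sketch of the three SCPL training signals for one batch."""
    z, z_aug = encoder(obs), encoder(aug_obs)

    # (1) Value consistency: original and augmented views both regress
    #     to the same target value, keeping attention on task-relevant pixels.
    value_loss = F.mse_loss(critic(z, action), target_q) \
               + F.mse_loss(critic(z_aug, action), target_q)

    # (2) Policy consistency: KL divergence constraint between the action
    #     distributions produced from original and augmented observations.
    policy_kl = torch.distributions.kl_divergence(actor(z), actor(z_aug)).mean()

    # (3) Dynamics auxiliary task: predict the next latent state and the reward
    #     from both views, keeping the encoder dynamics- and reward-relevant.
    with torch.no_grad():
        z_next = encoder(next_obs)
    dyn_loss = sum(F.mse_loss(dynamics_head(f, action), z_next)
                   + F.mse_loss(reward_head(f, action), reward)
                   for f in (z, z_aug))

    return value_loss, policy_kl, dyn_loss
```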
Saliency masked maps of SVEA, SGQN, and SCPL (ours), showing the attention regions of the value functions on the DMC-GB benchmark. SVEA pays similar attention to original and unseen observations, but its attention includes perturbed, task-irrelevant pixels. SGQN focuses well on the original observations but exhibits inconsistent attention on perturbed observations. Our method captures precise and consistent attention regions in both.
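The masked maps above can be produced with a gradient-based attribution of the value function with respect to the input pixels. A minimal sketch of such a procedure is shown below; the `encoder`/`critic` interfaces and the 95% quantile threshold are assumptions for illustration, not the exact attribution method used in the paper.

```python
import torch

def saliency_masked_obs(encoder, critic, obs, action, quantile=0.95):
    """Keep only the pixels that most influence the value estimate."""
    obs = obs.clone().requires_grad_(True)
    critic(encoder(obs), action).sum().backward()             # dQ / d(obs)
    attrib = obs.grad.abs().max(dim=1, keepdim=True).values   # per-pixel attribution
    thresh = torch.quantile(attrib.flatten(1), quantile, dim=1)
    mask = (attrib >= thresh.view(-1, 1, 1, 1)).float()       # top-salience pixels
    return mask * obs.detach()                                # saliency-masked observation
```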
The KL divergence between the action distributions in the training and test environments on DMC-GB, where our method achieves the smallest divergence. The smaller the KL divergence, the more stable the policy under perturbed observations.
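This metric can be estimated directly from the actor's outputs on paired observations from the training and test environments. A minimal sketch, assuming a SAC-style actor that returns the mean and standard deviation of a diagonal Gaussian over actions (the interface is an assumption):

```python
import torch
from torch.distributions import Normal, kl_divergence

@torch.no_grad()
def policy_divergence(actor, encoder, train_obs, test_obs):
    """Average KL between action distributions on original vs. perturbed observations."""
    mu_a, std_a = actor(encoder(train_obs))   # same states, training backgrounds
    mu_b, std_b = actor(encoder(test_obs))    # same states, perturbed backgrounds
    kl = kl_divergence(Normal(mu_a, std_a), Normal(mu_b, std_b))
    return kl.sum(dim=-1).mean()
```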
Saliency attribute maps and saliency attribute masked maps of SCPL, SGQN, SVEA, and SAC on the Walker Walk and Cartpole Swingup tasks.
The GIFs above demonstrate that SCPL enables the agent to focus on more precise task-relevant regions and to ignore task-irrelevant pixels in perturbed observations.
t-SNE maps of the embeddings and actions learned with SCPL, SGQN, SVEA, and SAC. We obtained t-SNE projections for 20 motion situations by randomly selecting 40 backgrounds from the video hard setting. Colors denote motion situations, and each dot represents a representation or an action. As shown in the figure above, SCPL generates more robust representations and a more stable policy.
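For reference, the visualization above can be reproduced with off-the-shelf scikit-learn t-SNE once the embeddings (or actions) and their motion-situation labels are collected; the perplexity, initialization, and colormap below are assumptions, not the paper's exact settings.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(points, motion_labels, title="t-SNE of embeddings"):
    """Project representations (or actions) to 2-D and color by motion situation."""
    xy = TSNE(n_components=2, perplexity=30, init="pca",
              random_state=0).fit_transform(np.asarray(points))
    plt.scatter(xy[:, 0], xy[:, 1], c=motion_labels, cmap="tab20", s=5)
    plt.title(title)
    plt.axis("off")
    plt.show()
```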
SCPL: 978 | 982 | 798 | 960 | 957
SGQN: 874 | 868 | 740 | 910 | 624
SVEA: 907 | 267 | 352 | 702 | 443
SAC: 42 | 165 | 155 | 0 | 6
Task: Peg in Box. Episode reward in the training environment (first column) and the test environments (others).
A second manipulation task; train (first column) and test (others):
SCPL: 218 | 216 | 217 | 199 | 213
SGQN: 192 | 189 | -106 | 51 | -61
Driving distance in CARLA under different weather conditions.
SCPL: clear noon 887 m | wet cloudy noon 505 m | soft rain sunset 306 m | wet sunset 339 m
SGQN: clear noon 761 m | wet cloudy noon 263 m | soft rain sunset 58 m | wet sunset 91 m