Develop a New Solution for Addressing Robustness in Reinforcement Learning
Improved the generalization performance of reinforcement learning models by applying style transfer techniques.
Presented a novel approach to addressing robustness issues in reinforcement learning by training agents that are resilient to various style transformations.
Combining Style Transfer with Reinforcement Learning
Combine style transfer techniques with reinforcement learning to train models that are robust to visual variations.
Separate content and style information to enable agents to maintain consistent performance across diverse visual environments.
Reinforcement learning models tend to overfit to their training environment, so new environments or visual variations can degrade their performance in real-world applications.
To enhance the robustness of reinforcement learning models, they must be able to maintain high performance even under diverse visual transformations.
Develop a data augmentation method using style transfer to ensure that reinforcement learning agents maintain robust performance under diverse visual variations and noise.
Combine style transfer with reinforcement learning to train agents capable of adapting to various environmental changes.
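As a rough illustration of this idea, the sketch below wraps a pixel-observation environment so that every frame passes through a style transfer model before the agent sees it. The `stylize` function, the style images, and the environment are placeholders, not the project's actual code; a Gymnasium-style API is assumed.

```python
import random
import gymnasium as gym

class StyleAugmentWrapper(gym.ObservationWrapper):
    """Apply a randomly chosen style transfer to each image observation."""

    def __init__(self, env, stylize, style_images):
        super().__init__(env)
        self.stylize = stylize          # placeholder for a pretrained style transfer network
        self.style_images = style_images

    def observation(self, obs):
        # Randomize the visual style while (ideally) preserving task-relevant content.
        style = random.choice(self.style_images)
        return self.stylize(obs, style)

# Usage sketch (all names are hypothetical):
# env = StyleAugmentWrapper(pixel_env, stylize=style_model, style_images=monet_styles)
```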
Style Transfer Training
Generated various visual transformations using style transfer and applied them to reinforcement learning.
Encoder-Decoder structure
AdaIN structure (see the AdaIN sketch after this list)
Style transfer & RL trained jointly
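The AdaIN block referenced above can be summarized in a few lines. This is a minimal PyTorch sketch of the standard adaptive instance normalization operation (Huang & Belongie, 2017), not the project's exact module: content features are normalized per channel and then rescaled with the style features' channel-wise statistics.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # content_feat, style_feat: (N, C, H, W) feature maps from a shared encoder (e.g. VGG).
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    # Strip the content's own statistics, then impose the style's statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```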
High-quality image generation through style transfer is essential for it to benefit reinforcement learning.
⇒ Goal: Improve style transfer performance!
Style Transfer
Drafting and Revision: a Laplacian pyramid network used for fast, high-quality artistic style transfer.
Train style transfer model
MuJoCo + Monet style
MuJoCo + random style
StyTR2: Image Style Transfer with Transformers.
⇒ StyTR2 was deemed suitable for style transfer in reinforcement learning environments.
Style Transfer (StyTR2 + RL)
Applied the StyTR2 model to reinforcement learning.
While the style transfer results are good, RL performance falls short.
Need to improve the RL model's learning performance.
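One way such a setup can look in practice is sketched below: a frozen, pretrained style transfer model augments observations sampled from the replay buffer before the actor-critic update. The `agent`, `replay_buffer`, and `style_model` interfaces are hypothetical placeholders, assumed only for illustration.

```python
import random
import torch

@torch.no_grad()
def stylize_batch(obs_batch, style_model, style_feats):
    # obs_batch: (B, C, H, W) frames sampled from the replay buffer.
    style = random.choice(style_feats)          # pick one style per batch
    return style_model(obs_batch, style)        # frozen style transfer network

def update_step(agent, replay_buffer, style_model, style_feats, batch_size=128):
    obs, action, reward, next_obs, done = replay_buffer.sample(batch_size)
    # Augment only the observations; actions and rewards describe the underlying content.
    obs_aug = stylize_batch(obs, style_model, style_feats)
    next_obs_aug = stylize_batch(next_obs, style_model, style_feats)
    agent.update_critic(obs_aug, action, reward, next_obs_aug, done)
    agent.update_actor(obs_aug)
```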
Experiments
Added Encoder Loss
Applied CURL Model (see the contrastive loss sketch after this list)
Added Dynamics to RL Model
CLS Token for Image Embedding
Style transfer with stacked input
K-step style transfer
CURL + K-step style transfer
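For reference, the contrastive objective behind the CURL experiments can be written compactly. The sketch below follows the published CURL formulation (bilinear similarity with an InfoNCE loss over a batch); in these experiments the two views of an observation would be two style-transferred versions of the same frame, but the exact encoders and augmentations used here are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CURLHead(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        # Learned bilinear similarity matrix W, as in the CURL paper.
        self.W = nn.Parameter(torch.rand(feature_dim, feature_dim))

    def contrastive_loss(self, z_anchor: torch.Tensor, z_positive: torch.Tensor) -> torch.Tensor:
        # z_anchor: (B, D) from the online encoder; z_positive: (B, D) from the momentum encoder.
        logits = z_anchor @ self.W @ z_positive.detach().t()        # (B, B) similarity scores
        logits = logits - logits.max(dim=1, keepdim=True).values    # stabilize the softmax
        labels = torch.arange(logits.size(0), device=logits.device)
        # Matching (anchor_i, positive_i) pairs lie on the diagonal.
        return F.cross_entropy(logits, labels)
```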
We attempted to make reinforcement learning robust to visual noise by applying style transfer methods.
Because style transfer and reinforcement learning are trained in very different ways, combining them directly proved challenging.
Preserving content during style transfer is crucial for reinforcement learning, but difficult to enforce in the model.
Gained extensive experience with both reinforcement learning and style transfer through various experiments.
If content can be preserved in a way that benefits reinforcement learning, this remains a promising direction for future research.