Deep reinforcement learning has shown remarkable success in continuous control tasks, yet often requires extensive training data, struggles with complex, long-horizon planning, and fails to maintain safety constraints during operation. Meanwhile, Model Predictive Control (MPC) offers explainability and constraint satisfaction but typically yields only locally optimal solutions and demands careful cost function design. This paper introduces the Q-guided Stein variational model predictive Actor-Critic (Q-STAC), a novel framework that bridges these approaches by integrating Bayesian MPC with actor-critic reinforcement learning through constrained Stein Variational Gradient Descent (SVGD). Our method optimizes control sequences directly using learned Q-values as objectives, eliminating the need for explicit cost function design while leveraging known system dynamics to enhance sample efficiency and ensure control signals remain within safe boundaries. Extensive experiments on 2D navigation and robotic manipulation tasks demonstrate that Q-STAC achieves superior sample efficiency, robustness, and optimality compared to state-of-the-art algorithms, while maintaining the high expressiveness of policy distributions.
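As a point of reference, the underlying Stein variational update can be written as follows. A minimal sketch: the Boltzmann-style target over control sequences, $p(\tau \mid s) \propto \exp\!\big(Q(s,\tau)/\alpha\big)$, the temperature $\alpha$, and the particle count $m$ are assumptions for illustration, not definitions taken from the paper:
$$
\tau_i \leftarrow \tau_i + \epsilon\,\hat{\phi}^*(\tau_i),
\qquad
\hat{\phi}^*(\tau) = \frac{1}{m}\sum_{j=1}^{m}\Big[k(\tau_j,\tau)\,\nabla_{\tau_j}\log p(\tau_j \mid s) + \nabla_{\tau_j}k(\tau_j,\tau)\Big],
$$
where, under the assumed target, $\nabla_{\tau}\log p(\tau \mid s) = \tfrac{1}{\alpha}\nabla_{\tau}Q(s,\tau)$ and $k(\cdot,\cdot)$ is a positive-definite kernel (e.g., RBF); a constrained variant additionally projects or clamps each $\tau_i$ onto the feasible control set.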
A comprehensive overview of our proposed algorithm, Q-STAC. The architecture consists of four main parts: (1) an Actor that initializes prior distributions from its networks and samples control particle trajectories from them; (2) control particles that are optimized via Bayesian MPC inference with SVGD, using Q-values as the objective; (3) a Critic network for Q-value estimation; and (4) an Environment module in which the agent interacts to reach the goal state.
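To make the particle-optimization step concrete, the following is a minimal PyTorch sketch of one Q-guided SVGD update over a set of control-sequence particles. The names (`QNet`, `rbf_kernel`, `svgd_step`), the RBF kernel with the median heuristic, the temperature `alpha`, and the clamping to an assumed actuation box are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a Q-guided SVGD update on control particles.
# Assumptions (not from the paper): target density p(u | s) ∝ exp(Q(s, u) / alpha),
# an RBF kernel with the median heuristic, and a toy MLP critic QNet.
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Toy critic Q(s, u) over a flattened control sequence u (illustrative only)."""
    def __init__(self, state_dim, ctrl_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + ctrl_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, state, controls):
        return self.net(torch.cat([state, controls], dim=-1))

def rbf_kernel(x):
    """RBF kernel matrix k(x_j, x_i) and its gradient w.r.t. x_j (median heuristic)."""
    diff = x.unsqueeze(1) - x.unsqueeze(0)                 # (n, n, d), diff[j, i] = x_j - x_i
    sq_dist = (diff ** 2).sum(-1)                          # (n, n)
    h = sq_dist.median() / (2.0 * torch.log(torch.tensor(x.shape[0] + 1.0)))
    k = torch.exp(-sq_dist / (h + 1e-8))                   # (n, n)
    grad_k = -2.0 / (h + 1e-8) * diff * k.unsqueeze(-1)    # grad_k[j, i] = ∇_{x_j} k(x_j, x_i)
    return k, grad_k

def svgd_step(particles, state, q_net, alpha=1.0, step_size=1e-2, u_min=-1.0, u_max=1.0):
    """One Q-guided SVGD step over a set of control-sequence particles."""
    particles = particles.detach().requires_grad_(True)
    q = q_net(state.expand(particles.shape[0], -1), particles)
    # ∇_u log p(u | s) = ∇_u Q(s, u) / alpha under the assumed Boltzmann target.
    grad_log_p = torch.autograd.grad(q.sum(), particles)[0] / alpha
    k, grad_k = rbf_kernel(particles.detach())
    # Stein variational direction: attraction toward high Q plus a repulsive term.
    phi = (k @ grad_log_p + grad_k.sum(dim=0)) / particles.shape[0]
    # Clamp particles to keep controls within the (assumed) actuation limits.
    return (particles + step_size * phi).clamp(u_min, u_max).detach()

# Usage sketch: 32 particles, each a flattened 3-step sequence of 2-D controls.
q_net = QNet(state_dim=4, ctrl_dim=6)
state = torch.randn(4)
particles = torch.rand(32, 6) * 2.0 - 1.0
for _ in range(20):
    particles = svgd_step(particles, state, q_net)
```

In the full algorithm, a step of this kind would be repeated for a fixed number of iterations per environment state, after which the agent executes the first action of the resulting control distribution; the exact rollout and execution scheme here is assumed for illustration.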
[Results figures: learning curves comparing Q-STAC (Ours), S2AC, SAC, TD3, and PPO.]