Optimal Operable Power Flow: Sample-efficient Holomorphic Embedding-based Reinforcement Learning
Ahmed Rabee Sayed, Xian Zhang, Guibin Wang, Cheng Wang, Jing Qiu
IEEE Transactions on Power Systems, 13 April 2023
Abstract
The nonlinearity of the physical power flow equations divides the decision-making space into operable and non-operable regions; existing control techniques can therefore be attracted to mathematically feasible but non-operable decisions. Moreover, the rising uncertainties of modern power systems demand fast, optimal actions to maintain system security and stability. This paper proposes a holomorphic embedding-based soft actor-critic (HE-SAC) algorithm that finds a fast optimal operable power flow (OOPF) by leveraging deep reinforcement learning and advanced complex-analysis techniques. First, a dynamic HE-based layer is developed that guarantees solution operability and uses the previous operable germ instead of the no-load germ for high computational efficiency. Second, a model-based policy optimization scheme is built on a novel predictive model to generate additional data and raise the sample efficiency of the SAC algorithm. Third, the reward function is augmented with the degree of constraint violation and the policy entropy to enhance solution feasibility. Simulation results demonstrate the computational performance of the proposed dynamic HE layer and the superiority of the proposed HE-SAC over several state-of-the-art model-free RL algorithms and optimization methods. The proposed approach demonstrates its practicability and fast operable control for power system operation.
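The augmented reward described above combines the operating-cost objective, the degree of constraint violation, and a SAC-style policy-entropy term. A minimal sketch of such a penalty-augmented reward is given below; the function name, signature, and coefficient values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def augmented_reward(cost, violations, log_prob,
                     penalty_coef=100.0, entropy_coef=0.2):
    """Illustrative penalty-augmented reward (not the paper's exact form).

    cost        : scalar operating cost of the chosen action
    violations  : array of signed constraint margins (positive = violated)
    log_prob    : log-probability of the action under the current policy
    """
    # Degree of constraint violation: sum of positive overshoots only.
    violation_penalty = penalty_coef * float(np.sum(np.maximum(violations, 0.0)))
    # SAC-style entropy bonus: -alpha * log pi(a|s).
    entropy_bonus = -entropy_coef * log_prob
    # Minimize cost and violations while encouraging exploration.
    return -cost - violation_penalty + entropy_bonus
```

With zero violations and a deterministic-like action (log_prob = 0), the reward reduces to the negative cost; any violated constraint lowers it in proportion to the overshoot, which steers the policy toward the operable, feasible region.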
Keywords
Reinforcement learning, Holomorphic embedding, Operable power flow, Soft actor-critic algorithm, Model-based policy optimization