Abstract:
To avoid the curse of dimensionality and improve controllability, a fast reinforcement learning control method for large-scale power network systems based on state dimension reduction is proposed. A compressed state vector is constructed by projecting the measured state through a projection matrix that captures the dominant controllable subspace of the open-loop network model; exploiting the low-rank property of network controllability in this way avoids the curse of dimensionality. A deep reinforcement learning controller operating on the reduced-dimension state is then designed so that the resulting closed-loop cost is close to the optimal LQR cost. Experimental results on a consensus network system and an IEEE wide-area control benchmark show that the proposed method significantly shortens learning time while maintaining good sub-optimal performance.
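The following is a minimal sketch, not the authors' implementation, of the projection idea described above: the state of an assumed linear network model is compressed onto the dominant controllable subspace (taken here from the leading eigenvectors of the controllability Gramian), and a controller designed on the reduced state is compared against the full-state optimal LQR cost. The system matrices, dimensions, and the use of a directly solved reduced-order LQR in place of the learned controller are all illustrative assumptions.

```python
# Sketch of projection-based state compression for a linear network model.
# Assumed dynamics x' = A x + B u with a randomly generated stable A (not from the paper).
import numpy as np
from scipy.linalg import solve_lyapunov, solve_continuous_are

rng = np.random.default_rng(0)
n, m, k = 50, 5, 8          # full state dim, input dim, reduced dim (illustrative sizes)

# Random, well-damped open-loop network model (assumption for demonstration only).
A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)
B = rng.standard_normal((n, m))

# Controllability Gramian W solves A W + W A^T + B B^T = 0.
W = solve_lyapunov(A, -B @ B.T)

# Projection matrix: top-k eigenvectors of W span the dominant controllable subspace.
eigval, eigvec = np.linalg.eigh(W)
P = eigvec[:, np.argsort(eigval)[::-1][:k]]   # n x k, compressed state z = P^T x

# Reduced-order model in the compressed coordinates.
Ar, Br = P.T @ A @ P, P.T @ B

# Reduced-order LQR gain, used here as a stand-in for the learned controller
# that the paper's method would obtain from data.
Q, R = np.eye(n), np.eye(m)
Xr = solve_continuous_are(Ar, Br, P.T @ Q @ P, R)
Kr = np.linalg.solve(R, Br.T @ Xr)            # control law u = -Kr @ (P.T @ x)

# Full-state optimal LQR for cost comparison.
X = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ X)

def closed_loop_cost(Acl, Q_eff, x0):
    """Infinite-horizon quadratic cost from x0, assuming Acl is Hurwitz."""
    Pcost = solve_lyapunov(Acl.T, -Q_eff)
    return float(x0 @ Pcost @ x0)

x0 = rng.standard_normal(n)
cost_full = closed_loop_cost(A - B @ K, Q + K.T @ R @ K, x0)
cost_reduced = closed_loop_cost(A - B @ Kr @ P.T,
                                Q + P @ Kr.T @ R @ Kr @ P.T, x0)
print("optimal LQR cost      :", cost_full)
print("reduced-state cost    :", cost_reduced)
```

In this sketch the compressed state has dimension k rather than n, so any learning procedure run on z = P^T x works in a much smaller space, which is the source of the claimed speed-up; the gap between the two printed costs illustrates the sub-optimality being traded for that reduction.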