Recent advances in AI, particularly in natural language processing, have been remarkable. However, applying such techniques to time-series data is not straightforward. While a sentence can still convey its meaning after some words are changed, a similar alteration of, say, a sinusoidal signal may leave it no longer recognizable as a sinusoid: it is the temporal order, not the individual values, that carries the information. Therefore, to predict and control the behavior of future time-series data dynamically, we need not only adaptations of existing machine learning techniques but also control theory, which explicitly handles dynamical systems. Examples of past research in line with this interdisciplinary vision are presented below.
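To make this point concrete, the following toy snippet (an illustration of the argument above, not part of any cited work) shuffles the samples of a sinusoid: every individual value is preserved, yet the waveform and its dominant frequency are destroyed.

```python
import numpy as np

# A 5 Hz sinusoid sampled at 1 kHz for 1 second (hypothetical illustration).
t = np.arange(0, 1, 1e-3)
x = np.sin(2 * np.pi * 5 * t)

# Shuffle the samples: every individual value is preserved ...
rng = np.random.default_rng(0)
x_shuffled = rng.permutation(x)

# ... but the temporal structure, and hence the frequency content, is gone.
print(np.allclose(np.sort(x), np.sort(x_shuffled)))   # True: identical value sets
dominant_bin = lambda s: np.argmax(np.abs(np.fft.rfft(s))[1:]) + 1
print(dominant_bin(x), dominant_bin(x_shuffled))       # clear 5 Hz peak vs. noise-like spectrum
```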
Small-data driven control: The premise behind the buzzword "big data" often fails when predicting and controlling dynamical systems. For instance, when forecasting heatstroke incidents on Ginza Chuo Street, the number of sensors that can be deployed is tiny compared to the size of the area, and the constantly and drastically changing circumstances make past data insufficient. What is needed instead is a theory of "small-data" driven control that answers questions such as how much data is required to achieve a given objective and what can be accomplished with less data. These works provide part of the answers to these questions, contributing to a foundational theory for constructing safe control systems through collaborative research with Professor Kiminao.
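As a toy illustration of the "how much data is needed" question (my own sketch, not the method of the works cited above), consider identifying a linear system x_{t+1} = A x_t + B u_t with n states and m inputs from a single noise-free trajectory: least squares recovers (A, B) only once the regressor matrix stacking [x_t; u_t] has full row rank, which requires at least n + m samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 1                              # state and input dimensions (toy example)
A = rng.normal(size=(n, n)) * 0.3        # hypothetical stable-ish system
B = rng.normal(size=(n, m))

def collect(T):
    """Simulate T steps of x_{t+1} = A x_t + B u_t with random inputs."""
    X = np.zeros((n, T + 1)); U = rng.normal(size=(m, T))
    for t in range(T):
        X[:, t + 1] = A @ X[:, t] + B @ U[:, t]
    return X, U

for T in (n + m - 1, n + m, 5 * (n + m)):    # below / at / above the data threshold
    X, U = collect(T)
    Z = np.vstack([X[:, :-1], U])            # regressor columns [x_t; u_t]
    Theta, *_ = np.linalg.lstsq(Z.T, X[:, 1:].T, rcond=None)   # Theta.T ~ [A B]
    err = np.linalg.norm(np.hstack([A, B]) - Theta.T)
    print(f"T={T:2d}  rank(Z)={np.linalg.matrix_rank(Z)}/{n+m}  identification error={err:.2e}")
```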
Policy gradient for control: Policy gradient methods, which iteratively improve a policy from limited data, are widely used in modern reinforcement learning. However, their application to dynamical systems is not well established, especially in its mathematical aspects. As one step toward this goal, building on the mathematical tools developed in the research above and on non-convex optimization, we proved global linear convergence of policy gradient methods for designing a class of dynamical output-feedback controllers. This result has since been extended to LQG controller design. Extending the approach to nonlinear systems and to distributed learning remains a topic for future work.
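The results above concern dynamical output-feedback and LQG controllers; the sketch below only illustrates the underlying idea in the simplest textbook setting, namely model-based policy gradient descent on a static state-feedback LQR gain (a standard formulation, not the algorithm of the papers).

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy discrete-time system and LQR weights (hypothetical values).
A = np.array([[0.9, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.eye(1); Sigma0 = np.eye(2)   # initial-state covariance

def cost_and_grad(K):
    """Exact LQR cost J(K) = tr(P_K Sigma0) and its policy gradient."""
    Acl = A - B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)   # P = Acl' P Acl + Q + K' R K
    S = solve_discrete_lyapunov(Acl, Sigma0)              # S = Acl S Acl' + Sigma0
    grad = 2 * ((R + B.T @ P @ B) @ K - B.T @ P @ A) @ S
    return np.trace(P @ Sigma0), grad

K = np.zeros((1, 2))            # initial gain; A itself is Schur stable here
eta = 1e-3                      # step size
for it in range(500):
    J, G = cost_and_grad(K)
    K_next = K - eta * G
    # Backtrack if the update would leave the set of stabilizing gains.
    while np.max(np.abs(np.linalg.eigvals(A - B @ K_next))) >= 1:
        eta /= 2
        K_next = K - eta * G
    K = K_next
print(f"final cost {J:.4f}, final gain {K}")
```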
The complexity of power systems is increasing with the rapid penetration of renewable energy sources, such as solar power plants, and the entry of numerous transmission system operators. Owing to these structural changes, the methodologies developed in power engineering to date are becoming fundamentally inadequate. There is an urgent need for control-theory-guided power engineering that can adapt to such changes and ensure both supply-demand balance and stability while fully utilizing renewable energy resources. The following are examples of research results:
Power flow optimization for balancing economy and stability: The stability of a power system depends significantly on its power flow. Traditional power flow design, however, has focused solely on economic considerations and neglected dynamic stability. In this research with Professor Masaki at Keio University and Professor Chakrabortty at North Carolina State University, we proposed a method for finding power flows that enhance damping performance without significantly compromising economic optimality. The method adjusts the reactive power outputs of generators as control parameters to search for a Pareto optimal trade-off between fuel cost and stability. Recently, we have applied these findings to the charging scheduling problem of electric vehicles in distribution grids.
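The snippet below is a purely illustrative sketch of the weighted-sum scalarization commonly used to trace such a Pareto front; the fuel-cost and damping functions are hypothetical stand-ins, not the actual optimal power flow or small-signal models used in this research.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins: q = reactive-power setpoints of two generators (p.u.).
def fuel_cost(q):
    # Economic objective (toy quadratic deviation from the cost-optimal dispatch).
    return (q[0] - 0.2) ** 2 + 2.0 * (q[1] - 0.1) ** 2

def damping_penalty(q):
    # Stability objective (toy surrogate: smaller = better-damped dominant mode).
    return np.exp(-(q[0] + 0.5 * q[1])) + 0.3 * (q[0] - q[1]) ** 2

bounds = [(-0.5, 1.0), (-0.5, 1.0)]            # reactive-power limits
pareto = []
for w in np.linspace(0.0, 1.0, 11):            # weighted-sum scalarization
    obj = lambda q, w=w: (1 - w) * fuel_cost(q) + w * damping_penalty(q)
    res = minimize(obj, x0=[0.0, 0.0], bounds=bounds)
    pareto.append((fuel_cost(res.x), damping_penalty(res.x)))

for c, d in pareto:                            # sweep traces the cost/stability trade-off
    print(f"fuel cost {c:.3f}   damping penalty {d:.3f}")
```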
Control-theoretic analysis of wind power generators: Connecting doubly fed induction generator (DFIG) type wind turbines to the grid is known to induce power oscillations, and in some situations these oscillations are difficult to suppress. This means that grid-connected wind power can be vulnerable to disturbances such as lightning, and that its stability cannot be improved by any control method. In this research with Professor Chakrabortty, we theoretically analyzed the controllability of the DFIG to identify the conditions under which control becomes difficult. We also proposed a new DFIG mechanism with high controllability. The results were verified on a benchmark model, the 68-bus test system.
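The following sketch only illustrates the kind of controllability test involved, applied to a hypothetical third-order model (not the actual DFIG dynamics): the PBH measure sigma_min([A - lambda*I, B]) quantifies how controllable each mode is, and a weakly coupled oscillatory mode yields a small value, indicating that damping it by feedback is difficult.

```python
import numpy as np

# Hypothetical linearized model dx/dt = A x + B u (NOT an actual DFIG model).
# The input reaches the oscillatory mode only through a weak coupling `eps`,
# mimicking a situation in which damping the oscillation is hard.
eps = 1e-3
A = np.array([[-0.5, 1.0, 0.0],
              [-1.0, -0.5, eps],
              [0.0, 0.0, -0.1]])
B = np.array([[0.0], [0.0], [1.0]])
n = A.shape[0]

# Kalman controllability matrix [B, AB, A^2 B]: technically full rank,
# but its smallest singular value is tiny, i.e., near-uncontrollability.
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
svals = np.linalg.svd(C, compute_uv=False)
print("rank:", np.linalg.matrix_rank(C), " smallest singular value:", svals[-1])

# PBH test: sigma_min([A - lam I, B]) per mode; small values flag hard-to-control modes.
for lam in np.linalg.eigvals(A):
    M = np.hstack([A - lam * np.eye(n), B])
    print(f"mode {lam:.2f}: sigma_min = {np.linalg.svd(M, compute_uv=False)[-1]:.2e}")
```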