Based on the built abstract model, we further perform safety analysis in two directions: online safety monitoring and offline falsification.
The model-based safety monitoring relies on probabilistic model checking (PMC) and safety queries. Based on the PMC results, we further introduce a switching control strategy that switches between the AI controller and a predefined traditional safety controller to keep the AI-CPS safe, in the commonly adopted safety-redundancy context (see Fig. 3). If the PMC returns "unsafe", the traditional safety controller is activated to ensure the safety of the system; otherwise, the AI controller continues to determine the actually applied actions, increasing the overall efficiency of the AI-CPS. By using the proposed online safety monitoring technique, we can potentially design a hybrid control system that takes advantage of both the efficient AI controller and the traditional controller.
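To illustrate the switching logic, the following is a minimal sketch of the PMC-guided controller switch; the callables ai_controller, safety_controller, and pmc_unsafe are hypothetical placeholders standing in for the AI controller, the traditional safety controller, and the PMC safety query on the abstract model, not an implementation prescribed by this work.

```python
from typing import Any, Callable

def make_switching_controller(
    ai_controller: Callable[[Any], Any],
    safety_controller: Callable[[Any], Any],
    pmc_unsafe: Callable[[Any], bool],  # True iff the PMC verdict on the abstract model is "unsafe"
) -> Callable[[Any], Any]:
    """Build a hybrid controller that switches based on the PMC verdict."""

    def controller(state: Any) -> Any:
        if pmc_unsafe(state):
            # PMC reports "unsafe": activate the traditional safety controller.
            return safety_controller(state)
        # Otherwise, keep the AI controller in charge for efficiency.
        return ai_controller(state)

    return controller
```

In this sketch, the PMC query is evaluated at every control step; in practice, the monitoring frequency can be tuned to balance the verification overhead against the responsiveness of the switch.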
For offline falsification, existing work shows that state-of-the-art falsification techniques are not effective in falsifying AI-CPS. We therefore propose a novel model-guided falsification technique. Specifically, we first adopt randomly generated input signals as candidate signals. In the global search stage, for each candidate signal, which is either generated randomly or returned by the local search, we perform PMC on the abstract model with the signal, w.r.t. the safety query. If the PMC returns "unsafe", meaning that the candidate could lead to an unsafe region, we put the signal into a queue for local search. The local search is realized by stochastic optimization approaches, e.g., hill-climbing optimization, which are known to be effective for local exploitation. The signal returned by the local search is then sent back to the global search procedure as a promising candidate signal, as sketched below.
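The following is a schematic sketch of the model-guided falsification loop under stated assumptions: pmc_unsafe, sample_signal, perturb, and robustness are hypothetical helpers denoting, respectively, the PMC safety query on the abstract model, the random input-signal generator, the hill-climbing neighbour generator, and the quantitative robustness of the safety specification (a negative value indicating a violation); none of these names come from the paper.

```python
from collections import deque
from typing import Any, Callable, Optional

def model_guided_falsification(
    pmc_unsafe: Callable[[Any], bool],
    sample_signal: Callable[[], Any],
    perturb: Callable[[Any], Any],
    robustness: Callable[[Any], float],
    budget: int = 1000,
    local_steps: int = 50,
) -> Optional[Any]:
    """Alternate between PMC-guided global search and hill-climbing local search."""
    promising: deque = deque()  # candidates fed back to global search by local search

    for _ in range(budget):
        # Global search: reuse a promising candidate if available,
        # otherwise sample a fresh random input signal.
        signal = promising.popleft() if promising else sample_signal()

        # PMC on the abstract model w.r.t. the safety query; only signals
        # that may lead to an unsafe region are handed to local search.
        if not pmc_unsafe(signal):
            continue

        # Local search: hill climbing that minimises the robustness value.
        best, best_rob = signal, robustness(signal)
        for _ in range(local_steps):
            neighbour = perturb(best)
            rob = robustness(neighbour)
            if rob < best_rob:
                best, best_rob = neighbour, rob

        if best_rob < 0:
            return best  # specification violated: falsifying input found
        promising.append(best)  # send back to global search as a promising candidate

    return None  # budget exhausted without finding a falsifying input
```

The design intent is that the PMC check filters out candidates unlikely to reach an unsafe region, so the comparatively expensive local exploitation is spent only on promising regions of the input space.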