A brief description of the project

Summary

In this project we consider the problem of specifying and verifying the behaviour of systems characterised by a close interaction of a program with the environment. The latter includes user inputs, multiple heterogeneous interacting devices, and all the (physical) phenomena determining the conditions under which a program has to perform.

In this setting, the main challenge for formal analysis and verification is the unpredictability of the environment, whose behaviour is usually highly dynamic. For instance, some devices may appear, disappear, or become temporarily unavailable; arbitrary decisions by the users can change the behaviour of some components; unexpected behaviour may arise from the interplay of events that cannot be observed in practice or that were not anticipated at design time. The introduction of uncertainties and approximations in the design, modelling, and analysis of these systems is therefore inevitable to achieve some degree of robustness and resilience. Moreover, each program should be able to adapt its behaviour to the current environmental conditions.

Our main objective is to address the above-mentioned challenges by developing a

general framework for the modelling, analysis, and verification of systems under uncertainties.

In detail, we aim at:

  1. Defining a framework for the analysis of the robustness and adaptability of programs.

  2. Developing efficient verification techniques for the considered systems.

Motivation

With the ever-increasing complexity of the digital world and the massive diffusion of Internet-of-Things (IoT) systems, cyber-physical systems, and smart devices, we are progressively witnessing the rise of a new class of software applications, henceforth programs, that must be able to deal with highly changing operational conditions, like the software components of unmanned vehicles, (on-line) service applications, the devices in a smart house, etc. This is because each program has to interact with other programs, with heterogeneous devices, and with physical phenomena like wind, temperature, etc. In what follows, we will use the term environment to denote the ensemble of all the entities determining the conditions under which a program has to perform, and the term system to denote the combination of the environment and the program acting on it. The program gleans information on the current environmental conditions, in terms of data obtained via its sensors or input received from the environment, in order to adjust its behaviour accordingly. At the same time, the program can (partially) modify the activity of the environment via its actuators, or some output. Hence, the behaviour of a system is the result of the program-environment interplay.

The main challenge in the analysis and verification of these systems is then the dynamic and, sometimes, unpredictable behaviour of the environment. The highly dynamic behaviour of physical processes can only be approximated in order to become computationally tractable, and it can constitute a safety hazard for the devices in the system (such as an unexpected gust of wind for a drone that is autonomously setting its trajectory to avoid obstacles); some devices or programs may appear, disappear, or become temporarily unavailable; faults or conflicts may occur (for example, the program responsible for the ventilation of a room in a smart home may open a window, causing a conflict with the program that has to limit the noise level); sensors may introduce measurement errors; etc. The introduction of uncertainties and approximations in these systems is therefore inevitable. Under the term uncertainties, we subsume both the effects of inessential or unknown details of the evolution of physical quantities, which may also be impossible to observe in practice, and those of events in the environment that cannot be controlled by a program (for example, a program controlling a drone cannot prevent a gust of wind from occurring, but it can adapt its behaviour once the presence of wind is detected).

As uncertainties can be partially quantified, in the literature we can find a wealth of proposals of stochastic and probabilistic models, like Stochastic Hybrid Systems and Markov Decision Processes, and ad hoc solutions for specific application contexts. Yet, in these studies either the environment is not explicitly taken into account or it is modelled only deterministically. Optimisation techniques consider only a single objective at a time, and simulation and experimental validation of systems are usually carried out under convenient assumptions, like an exact knowledge of the values of input/output variables. Moreover, due to the variety of applications and systems, no general formal framework to deal with these challenges has been proposed so far. The lack of suitable abstractions and of an automatic support makes the analysis and verification of the considered systems difficult, laborious, and error prone.

Our aims

With this project we aim to contribute to addressing the above-mentioned challenges.

Our idea is to favour the modelling of the program-environment interplay over precise specifications of the operational behaviour of a program. Hence, we propose a general framework, based on formal semantics, that will allow us to develop novel analysis and verification techniques for systems under uncertainties, while keeping the meta-theory as simple as possible. In particular, we are interested in developing tools for assessing the ability of programs to adapt their behaviour with respect to the environmental conditions even under uncertainties.

To this end, we will tackle the three research questions described below.


  • Research question 1: Model checking systems under uncertainties.

A common feature of the formal models proposed in the literature to capture both the qualitative and quantitative behaviour of systems is that the (quantitative, labelled) transitions expressing the computation steps directly model the behaviour of the system as a whole.

In this project, we take a different point of view: we adopt the (discrete time) model by Castiglioni, Loreti and Tini and represent the program-environment interplay in terms of the changes they induce on a set of application-relevant data. Informally, we could model a system as a triple (P,E,D) where P is a process modelling the behaviour of the program, E models the activity of the environment, and D is a set of data "shared" by P and E. At each step, the interplay of P and E induces some changes on D. As these changes are subject to the presence of uncertainties, it is natural to represent them as a probability measure on the attainable sets of data. We remark that E can be any function modelling the environment, hence there are no restrictions a priori on the kind of perturbations that can be considered. The behaviour of the system is then entirely expressed by its evolution sequence, i.e., the sequence of probability measures obtained at each step.
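To make this concrete, the following minimal Python sketch approximates an evolution sequence by sampling. The data-state representation, the controller and wind functions, and all names are our own illustrative assumptions, not part of the model by Castiglioni, Loreti and Tini.

```python
import random
from typing import Callable, Dict, List

# A data state maps application-relevant variable names to values.
DataState = Dict[str, float]

# Hypothetical signatures: a program step and an environment step both take the
# current data state and return an updated one; the environment step may be
# randomised to model uncertainty.
ProgramStep = Callable[[DataState], DataState]
EnvironmentStep = Callable[[DataState], DataState]

def evolution_sequence(program: ProgramStep,
                       environment: EnvironmentStep,
                       initial: DataState,
                       steps: int,
                       samples: int = 1000) -> List[List[DataState]]:
    """Approximate the evolution sequence of the system (P, E, D): at each step
    the program-environment interplay updates the shared data, and the
    distribution over attainable data states is estimated empirically by
    running many independent sampled trajectories."""
    trajectories = [dict(initial) for _ in range(samples)]
    sequence = []
    for _ in range(steps):
        trajectories = [environment(program(d)) for d in trajectories]
        # One empirical distribution (a multiset of sampled data states) per step.
        sequence.append([dict(d) for d in trajectories])
    return sequence

# Illustrative example: a drone controller steering towards a target altitude
# while an unpredictable wind perturbs the shared data.
def controller(d: DataState) -> DataState:
    d = dict(d)
    d["altitude"] += 0.5 * (10.0 - d["altitude"])  # steer towards altitude 10
    return d

def wind(d: DataState) -> DataState:
    d = dict(d)
    d["altitude"] += random.gauss(0.0, 0.3)  # gust modelled as Gaussian noise
    return d

seq = evolution_sequence(controller, wind, {"altitude": 0.0}, steps=20)
```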

Clearly, this uncertain, stochastic behaviour needs to be taken into account in the specification, and the consequent verification, of requirements over systems. In the literature, quantitative extensions of model checking have been proposed, like stochastic (or probabilistic) model checking, and statistical model checking. These techniques usually rely either on a full specification of the system to be checked, or on the possibility of simulating the system by means of a Markovian model or Bayesian inference on samples. In general, quantitative model checking is based on a specification of requirements in a probabilistic temporal logic, such as PCTL, CSL, and probabilistic variants of LTL. Similarly, if Runtime Verification is preferred to off-line verification, probabilistic variants of MTL and STL have been proposed. Temporal logics allow us to express the properties of systems starting from the observation of a single computation. In the quantitative setting, uncertainties are usually dealt with by imposing probabilistic guarantees on a given property to be satisfied. This approach is natural and has found several applications, like establishing formal guarantees on reachability.

However, it does not allow us to analyse the properties of the distributions describing the transient behaviour of the system.

To specify complex system requirements that are not specific to a single computation, under uncertainties, we want to be able to characterise the distribution on data at a given time. For instance, we may want to guarantee that the distance of a self-driving vehicle from obstacles satisfies certain properties. With classic probabilistic temporal logics, like PCTL, we can verify whether the probability of the distance being in a given interval is beyond a given threshold. However, we cannot verify whether the values of such a distance are always distributed according to a specific distribution, e.g. a Gaussian over a given interval, nor require that the distribution of experienced failures exceeds a given mean only for a short time.
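The following small sketch illustrates this distinction on sampled distance values. The moment-based test is only a stand-in of our own devising for a genuine distribution-level check, which is precisely the kind of requirement UTL is meant to express.

```python
import statistics
from typing import List

def pctl_style_check(samples: List[float], low: float, high: float,
                     threshold: float) -> bool:
    """PCTL-style query: is P(distance in [low, high]) at least `threshold`?"""
    inside = sum(1 for x in samples if low <= x <= high)
    return inside / len(samples) >= threshold

def distribution_level_check(samples: List[float], target_mean: float,
                             target_std: float, tol: float = 0.1) -> bool:
    """Distribution-level query (not expressible as a PCTL threshold): do the
    sampled distances look like the required Gaussian, up to a tolerance on the
    first two moments? A real check would use a proper statistical distance."""
    m = statistics.fmean(samples)
    s = statistics.pstdev(samples)
    return abs(m - target_mean) <= tol and abs(s - target_std) <= tol
```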

Hence, we will introduce a novel temporal logic allowing us to:

  • Express requirements on system behaviour also in our purely data-driven setting.

  • Gain useful information on system behaviour, including its transient behaviour, regardless of the exact current computation.

  • Express syntactically the possible presence of perturbations in system specifications.

  • Take into account small variations of system behaviour over time.

In what follows, we will refer to this new logic as the Uncertain Temporal Logic (UTL).


  • Research question 2: Quantitative Runtime Monitoring

Model checking is a powerful automatic verification technique that is based on an exhaustive exploration of the state space of the model. Hence, it suffers from the state-space explosion problem, which limits its applicability. Runtime Verification has been proposed as a lightweight verification technique that can complement model checking. It deals with the study, development, and application of techniques for checking whether a run of a system under scrutiny satisfies a given property. In particular, Runtime Monitoring generally assumes a logic describing the desired specification requirements. From these, monitors are synthesised and instrumented to execute alongside a program to determine whether the requirements are satisfied or not. In online monitoring the system is monitored for violations at runtime, and thus the verdicts can be exploited to make decisions and adjust the behaviour of programs.
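As an illustration of the monitoring pattern described above, here is a minimal online monitor for an invariant-style requirement. The three-valued verdicts, the SafetyMonitor class, and the distance predicate are our own simplified assumptions, not monitors synthesised from UTL.

```python
from enum import Enum
from typing import Callable, Dict

class Verdict(Enum):
    FALSE = "violated"
    UNKNOWN = "inconclusive so far"

class SafetyMonitor:
    """Minimal online monitor for an invariant-style requirement: the predicate
    must hold at every observed step. A single violation yields an irrevocable
    FALSE verdict; otherwise the verdict stays UNKNOWN, since the property can
    still be violated in the future."""

    def __init__(self, predicate: Callable[[Dict[str, float]], bool]):
        self.predicate = predicate
        self.verdict = Verdict.UNKNOWN

    def observe(self, data: Dict[str, float]) -> Verdict:
        if self.verdict is Verdict.UNKNOWN and not self.predicate(data):
            self.verdict = Verdict.FALSE
        return self.verdict

# Instrumented alongside the program: feed each new data state to the monitor.
monitor = SafetyMonitor(lambda d: d["distance_to_obstacle"] > 1.0)
verdict = monitor.observe({"distance_to_obstacle": 2.3})
```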

To the best of our knowledge, no synthesis of monitors from logics with quantitative (probabilistic) modalities has been proposed so far. Hence, in this project we will study the synthesis of monitors from UTL specifications and use them to create an online monitoring framework. As previously outlined, one of the main concerns for the considered systems is the adaptability of programs: we want them to be able to adjust their behaviour with respect to the environment in order to fulfil their tasks. Hence, we will also study the development of a predictive model for adaptability. We plan to obtain the predictive model by integrating our framework, based on UTL and the evolution metric, with techniques from Game Theory. The idea is that the use of strategies can help us obtain faster algorithms and predictions, while keeping guarantees on their reliability. We will then pair the predictive model with our monitors to analyse the adaptability of programs, thus obtaining our version of predictive monitoring. Our idea is that the monitor passes the information on the current system behaviour to the predictive model, which in turn uses it to update the requirements on behaviour that have to be monitored. For instance, assume that the monitored formula F expresses a safety region for the system. If the monitor detects that the system is getting close to the boundaries of this region, it passes the information on the current behaviour to the predictive model. The model then uses this information to synthesise a formula F' representing a danger region, in the sense that if the system does not keep its behaviour within it, then it is doomed to fail. The new requirement thus obtained can then be monitored.
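A hypothetical sketch of this feedback loop is given below. Requirements are reduced to plain numeric distance thresholds and the predictive model to a placeholder function, both our own assumptions; in the actual framework they would be UTL formulae and a game-theoretic model.

```python
from typing import Iterable

def predictive_model(current_threshold: float) -> float:
    """Placeholder for the predictive model: given the current behaviour,
    synthesise a new requirement F' (a 'danger region'): violating it means
    the system is doomed to fail. The halving rule is purely illustrative."""
    return current_threshold * 0.5

def predictive_monitoring(distances: Iterable[float],
                          safety_threshold: float = 1.0,
                          warning_threshold: float = 1.5) -> str:
    """Monitor a stream of observed distances against the current requirement;
    when the system gets close to the boundary of the safety region, ask the
    predictive model for the new requirement F' and monitor that instead."""
    threshold = safety_threshold
    for d in distances:
        if d <= threshold:
            return "violation"                      # monitored requirement failed
        if d <= warning_threshold:
            threshold = predictive_model(threshold)  # switch to the new requirement F'
    return "no violation observed"

print(predictive_monitoring([3.0, 1.4, 1.2, 0.7]))
```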


  • Research question 3: Collective Adaptive Systems

As previously outlined, the environment can sometimes consist of a large number of interacting programs. This is the case of the new emerging class of systems called Collective Adaptive Systems (CAS). Due to their self-organising features, CAS have found successful applications in, e.g., the development of fully decentralised smart electric grids, public transportation, shared bike systems, and engineering. In the field of CAS, programs are usually referred to as agents. These agents, when considered in isolation from the system, usually show a rather simple behaviour. However, the interaction of a massive number of them enables the desired complex behaviour of the system, seen as a common goal of all the agents. Notably, agents coordinate their activities in a decentralised and often implicit way, and are usually subject to a dynamic and unpredictable environment. More precisely, in CAS the quantitative behaviour of each agent can change dynamically over time due to its interaction with the other agents, to agents freely joining or leaving the system at any time, or to failures and/or conflicts that may occur in the interactions. While we can find, in the literature, several proposals of formal specification and static verification techniques for CAS, to our knowledge there has been only one attempt to use a metric to analyse CAS performance. However, the proposed metric considers only spatio-temporal properties and it compares the behaviour of the agents within a system, rather than the overall behaviour of different systems.

Hence, in this project we will tackle the study of metric-based analysis of CAS. In particular, we are interested in obtaining a formal tool to measure the impact of changes in the population size on the system's behaviour, as well as the adaptability properties of each agent with respect to perturbations and/or failures of other system components. At the same time, we will study a game-theoretic approach to the analysis of CAS and its possible combination with our "metric-based" framework.
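As a rough illustration of what such a measure could look like, the sketch below compares, step by step, the empirical distributions of an observable produced by simulations at different population sizes. The one-dimensional Wasserstein distance and the simulate interface are our own assumptions, not the metric the project will develop.

```python
from typing import Callable, Dict, List

def wasserstein_1d(xs: List[float], ys: List[float]) -> float:
    """First-order Wasserstein distance between two empirical one-dimensional
    distributions given by equally many samples: the mean absolute difference
    of the sorted values."""
    a, b = sorted(xs), sorted(ys)
    assert len(a) == len(b), "expects the same number of samples per step"
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def population_impact(simulate: Callable[[int, int], List[List[float]]],
                      sizes: List[int],
                      steps: int) -> Dict[int, List[float]]:
    """Measure how changing the population size affects system behaviour.
    simulate(size, steps) is assumed to return, for each step, a list of sampled
    values of some observable (e.g. waiting times in a bike-sharing system);
    the result maps each size to its per-step distance from the baseline
    population."""
    baseline = simulate(sizes[0], steps)
    impact = {}
    for n in sizes[1:]:
        run = simulate(n, steps)
        impact[n] = [wasserstein_1d(baseline[t], run[t]) for t in range(steps)]
    return impact
```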