My research lies at the intersection of formal verification and security testing. On the one hand, modeling systems makes it possible to verify universal properties, such as security properties like confidentiality or authentication, thanks to model-checking. Such models may help in discovering complex attacks that are hard to find by a direct analysis of the system. However, when the model becomes very large, exhaustive verification is no longer possible. Random exploration based on enumerative combinatorics can then provide a probabilistic analysis of these models.
On the other hand, attacks discovered on these abstract models must be concretized before they can be executed on the System Under Test (SUT). Fault models provide a way to insert the knowledge of an expert in penetration testing (pentest) into model-based testing tools in order to exploit potential vulnerabilities of the SUT.
Model-checking aims at verifying whether properties hold on models. In the case of approximate explicit model-checking, models are represented by graphs, and it is generally enough to generate a finite number of paths to verify whether a property holds with a certain probability.
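As a sketch of this approximate approach, one can sample random paths in a graph and estimate the probability of reaching a property-violating state. The automaton below and its violating state "s3" are illustrative assumptions, not taken from my work:

```python
import random

random.seed(0)  # for reproducibility of the estimate

# Hypothetical toy automaton given as successor lists; "s3" plays the
# role of a state violating the property under check.
succ = {
    "s0": ["s1", "s2"],
    "s1": ["s0", "s3"],
    "s2": ["s2"],
    "s3": ["s3"],
}

def random_path(start, length):
    """Isotropic random walk: each successor is chosen uniformly."""
    state, path = start, [start]
    for _ in range(length):
        state = random.choice(succ[state])
        path.append(state)
    return path

def estimate_violation_probability(n_paths=10_000, length=20):
    """Monte Carlo estimate of the probability that a random path
    of the given length reaches the violating state "s3"."""
    hits = sum("s3" in random_path("s0", length) for _ in range(n_paths))
    return hits / n_paths

p = estimate_violation_probability()  # close to 1/3 for this toy automaton
```

With enough sampled paths, the estimate converges to the exact reachability probability, which is what makes a probabilistic verdict possible when exhaustive exploration is out of reach.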
Formal model-based testing aims at obtaining, in an automatic fashion, a subset of all possible tests that satisfies a coverage criterion. This subset is called a test suite, and the coverage criterion is defined from the formal specification of the system under test, which can be represented by a graphical view such as an automaton or a labeled transition system (LTS). The best-known coverage criteria are: vertex coverage, where every vertex has been explored at least once after executing the test suite; edge coverage, where every edge of the graph has been traversed; and all-paths coverage, which is generally impossible or too time-consuming to satisfy, so a weaker criterion based on a subset of the paths is used instead.
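These criteria can be made concrete on a toy LTS. The states, edges, and test suite below are chosen purely for illustration; note how the same suite can achieve full vertex coverage while leaving an edge uncovered:

```python
# Hypothetical LTS given as a set of edges (illustrative states only).
edges = {("q0", "q1"), ("q0", "q2"), ("q1", "q2"), ("q2", "q0")}
states = {s for e in edges for s in e}

def vertex_coverage(test_suite):
    """Fraction of states visited by at least one test (a test is a path)."""
    visited = {s for path in test_suite for s in path}
    return len(visited & states) / len(states)

def edge_coverage(test_suite):
    """Fraction of edges traversed by at least one test."""
    traversed = {(p[i], p[i + 1]) for p in test_suite for i in range(len(p) - 1)}
    return len(traversed & edges) / len(edges)

suite = [["q0", "q1", "q2", "q0"]]
# vertex_coverage(suite) is 1.0, yet edge ("q0", "q2") remains untraversed,
# so edge_coverage(suite) is only 0.75: edge coverage is the stronger criterion.
```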
My research is based on probabilistic explorations of such models that ensure a good coverage of the desired criterion, whatever the model topology. To do so, I use combinatorial techniques to count the number of elements to be covered and to guide the exploration of the system so as to maximize the minimal probability of covering any one of those elements.
The contributions of my thesis are the study and development of memory-efficient variants of the recursive method for counting and drawing uniformly at random [Wilf, 1977; Flajolet et al., 1994]. These variants make it possible to generate long paths in large automata. The idea is to combine floating-point arithmetic with a divide-and-conquer approach in order to obtain a significant gain in time and memory while retaining good precision [Oudinet et al., 2012]. Another major contribution is taking partial order reduction into account, which dramatically reduces the complexity of exploring very large systems represented as smaller concurrent components [Oudinet, 2007; Gaudel et al., 2008; Denise et al., 2012].
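A minimal sketch of the underlying recursive method may help: first count, for each state, the number of paths of each length; then draw a path uniformly at random by choosing each successor with probability proportional to the number of paths it starts. The automaton is a toy example, and exact integer arithmetic is used here, whereas my thesis variants trade it for floating-point computations and a divide-and-conquer scheme to save memory:

```python
import random

# Toy automaton as successor lists (illustrative, not a model from the thesis).
succ = {0: [1, 2], 1: [2], 2: [0, 1]}

def count_paths(length):
    """f[k][s] = number of paths of length k starting in state s,
    computed by the classic recurrence f[k][s] = sum of f[k-1] over successors."""
    f = [{s: 1 for s in succ}]  # one empty path from each state
    for k in range(1, length + 1):
        f.append({s: sum(f[k - 1][t] for t in succ[s]) for s in succ})
    return f

def draw_uniform_path(start, length, f):
    """Draw a path of the given length uniformly at random among all such
    paths: successor t is picked with probability f[k-1][t] / f[k][state]."""
    path, state = [start], start
    for k in range(length, 0, -1):
        weights = [f[k - 1][t] for t in succ[state]]
        state = random.choices(succ[state], weights=weights)[0]
        path.append(state)
    return path

f = count_paths(5)          # e.g. f[5][0] == 13 paths of length 5 from state 0
p = draw_uniform_path(0, 5, f)
```

The counting table can be reused to draw as many paths as needed; its size is what the floating-point and divide-and-conquer variants reduce.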
Finally, I am also interested in other coverage criteria related to specific needs. Grosu and Smolka (2005) show that verifying an LTL formula can be seen as looking for lassos (i.e., an elementary path followed by an elementary cycle) in a Büchi automaton that corresponds to the product of the system and the negation of the formula. An accepting lasso (i.e., one whose cycle contains a final state) in this automaton is a counterexample for the (liveness) formula. Their search algorithm for lassos is based on isotropic random walks, which are ill-suited when the automaton topology is not regular. In [Oudinet et al., 2011], we show how to maximize the minimal probability of covering a lasso and how to increase the chances of finding an accepting lasso, whatever the topology of the underlying graph. For criteria not based on paths, I have defined a multi-objective approach that lets the user weigh the maximization of the minimal probability of covering an element against the minimal probability of a path that goes through an element to be covered.
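The notion of lasso can be illustrated with a simple depth-first search on a toy graph. This only shows the definition at work (revisiting a state on the current DFS stack closes a cycle), not the biased random generation of [Oudinet et al., 2011]; a real Büchi automaton would additionally carry accepting states to check against the cycle:

```python
# Toy directed graph as successor lists (illustrative only).
succ = {"a": ["b"], "b": ["c"], "c": ["b", "d"], "d": []}

def find_lasso(start):
    """DFS; when a successor is already on the current stack, the stack
    splits into an elementary path followed by an elementary cycle."""
    stack, on_stack = [], set()

    def dfs(s):
        stack.append(s)
        on_stack.add(s)
        for t in succ[s]:
            if t in on_stack:
                i = stack.index(t)
                return stack[:i], stack[i:] + [t]  # (path, cycle)
            result = dfs(t)
            if result:
                return result
        stack.pop()
        on_stack.discard(s)
        return None

    return dfs(start)

path, cycle = find_lasso("a")  # e.g. path ["a"], cycle ["b", "c", "b"]
```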
Thus, in my thesis [Oudinet, 2010], I addressed the following points:
First, I gave a brief state of the art of model-checking and model-based testing, and discussed the limitations of random walks.
Then, I compared the efficiency of techniques to randomly cover paths (including the new ones I introduced), and I showed how to combine partial order reduction and uniform random generation in order to dramatically reduce the space complexity of the exploration.
Finally, I studied how to bias the previous methods to satisfy other coverage criteria; this resulted in an efficient method to generate lassos in reducible flow graphs and in a multi-objective approach to cover criteria not based on paths.
I also developed Rukia, a C++ library that implements most of the previous algorithms. This library is free software certified by the French agency for the protection of programs (APP).
I then worked on the EU project SPaCIoS (www.spacios.eu, 6M€). The project aims at testing security protocols for Web services. It follows another EU project, AVANTSSAR, which focused on verifying protocols thanks to model-checkers dedicated to security properties.
The idea is to reuse the tools developed for analyzing the system at production time in order to test the system after it has been deployed and consumed.
Thus, the goal is to test the security of Web services starting from models of their specifications. We proposed the following methodology: (i) inject well-known vulnerabilities into the model; (ii) use a model-checker to find potential attacks that exploit the injected vulnerabilities in order to violate a given security property; (iii) if such attacks exist, execute them on the SUT to check whether it suffers from the same vulnerabilities [Büchler et al., 2011, 2012].
When Alexander Pretschner moved to TU Munich, he asked me to follow him to pursue my work on this project. Since the fault injection is done at the model level, by applying semantic changes to the original model, we also compared the benefits of such semantic mutations with those of a more classical approach that makes syntactic changes only [Oudinet et al., 2013].
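To make the distinction concrete, consider a hypothetical authentication guard (this toy example is mine, not one from the cited paper): a syntactic mutant mechanically alters an operator, while a semantic mutant encodes a meaningful, well-known vulnerability.

```python
# Original guard from a hypothetical login model.
def guard(password, stored):
    return password == stored

# Syntactic mutant: flip the comparison operator, as in classical
# mutation testing; the change is purely mechanical.
def guard_syntactic_mutant(password, stored):
    return password != stored

# Semantic mutant: weaken the check itself, modeling a realistic
# vulnerability (an authentication bypass accepting any password).
def guard_semantic_mutant(password, stored):
    return True
```

A model-checker run against the semantic mutant looks for attacks that exploit a plausible flaw, whereas the syntactic mutant may produce behaviors with no security interpretation at all.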