University of Brasília (UnB)
Software Product Line Analyses: Reliability and Beyond
In this talk, we first present a scenario in the Body Sensor Network domain to motivate the development of reliable and maintainable dynamic software product lines. Next, we describe a supporting process and architecture. We then define a feature-family-based reliability analysis strategy for software product lines and report on empirical results comparing this strategy to others. The abstract structure relating these analyses is then formalized, including a mechanized proof of their semantic equivalence. Lastly, we present a strategy to bootstrap a formal framework of product-line analyses and then evolve it to consider other models and properties.
Israel Institute of Technology
Automated Program Repair Using Formal Verification Techniques
We focus on two different approaches to automatic program repair, based on formal verification methods. Both repair techniques consider infinite-state C-like programs and consist of a generate-validate loop, in which candidate repair programs are repeatedly generated and verified. Both approaches are incremental: partial information gathered in previous verification attempts is used in subsequent steps. However, the settings of the two approaches, and in particular their techniques for finding repairs, are quite distinct. The first approach uses syntactic mutations to repair sequential programs with respect to assertions in the code. It is based on a reduction to the problem of finding unsatisfiable sets of constraints, which is addressed using an interplay between SAT and SMT solvers.
A novel notion of must-fault-localization enables efficient pruning of the search space without losing any potential repairs. The second approach uses Assume-Guarantee style reasoning to verify and repair large programs that are decomposed into several concurrent components. The procedure then repeatedly constructs and verifies one of the components until a correct repair is found. The construction is computed via automata-learning techniques. Several different repair methods are considered, trading off precision against convergence to a correct repair.
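The generate-validate loop with syntactic mutations can be sketched in miniature. This is a hedged illustration only, not the talk's actual technique: the "program" is reduced to a single arithmetic operator, mutation swaps that operator, and exhaustive checking over a small input domain stands in for the SAT/SMT-based validation; all names here are hypothetical.

```python
# Minimal generate-validate repair sketch (illustrative, not the real tool).
# Mutation = swapping the arithmetic operator; validation = exhaustive
# assertion checking over a small domain (a stand-in for an SMT solver).
import itertools
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def run(op_name, x, y):
    """Execute the one-operator 'program' on a concrete input."""
    return OPS[op_name](x, y)

def validate(op_name, spec, domain):
    """Check the assertion spec(x, y, result) on every input pair."""
    return all(spec(x, y, run(op_name, x, y))
               for x, y in itertools.product(domain, repeat=2))

def repair(buggy_op, spec, domain):
    """Generate mutants of the buggy operator and validate each in turn."""
    for candidate in OPS:          # syntactic mutation: try each operator
        if validate(candidate, spec, domain):
            return candidate       # first candidate passing validation
    return None                    # no repair found in the mutation space

# Intended behavior: result == x + y, but the buggy program computes x - y.
spec = lambda x, y, r: r == x + y
print(repair('-', spec, range(-3, 4)))   # finds '+'
```

A real instance would generate mutants of program statements and discharge validation queries to SAT/SMT solvers rather than enumerating inputs, but the loop structure is the same.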
A New Approach for Active Automata Learning Based on Apartness
We present L#, a new and simple approach to active automata learning. Instead of focusing on equivalence of observations, like the L∗ algorithm and its descendants, L# takes a different perspective: it tries to establish apartness, a constructive form of inequality. L# does not require auxiliary notions such as observation tables or discrimination trees, but operates directly on tree-shaped automata. L# has the same asymptotic query and symbol complexities as the best existing learning algorithms, but we show that adaptive distinguishing sequences can be naturally integrated to boost the performance of L# in practice. Experiments with a prototype implementation, written in Rust, suggest that L# outperforms existing algorithms.
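The notion of apartness can be illustrated on a toy automaton. The sketch below is an assumption-laden illustration, not the L# algorithm itself: for a small hypothetical DFA, two states are shown apart by exhibiting a word on which their runs disagree about acceptance, found by breadth-first search over state pairs.

```python
# Illustrative apartness check on a toy DFA (not the L# algorithm itself).
# Two states p, q are apart iff some word w makes their runs disagree on
# acceptance; BFS over state pairs finds a shortest such witness.
from collections import deque

# Hypothetical 3-state DFA over the alphabet {'a', 'b'}.
delta = {0: {'a': 1, 'b': 0},
         1: {'a': 2, 'b': 0},
         2: {'a': 2, 'b': 2}}
accepting = {2}

def apartness_witness(p, q):
    """Return a shortest word on which p and q disagree, or None if none exists."""
    seen = {(p, q)}
    queue = deque([(p, q, '')])
    while queue:
        s, t, w = queue.popleft()
        if (s in accepting) != (t in accepting):
            return w                      # w witnesses that p and q are apart
        for sym in 'ab':
            nxt = (delta[s][sym], delta[t][sym])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt[0], nxt[1], w + sym))
    return None                           # no witness: the states agree on all words

print(apartness_witness(0, 1))   # a shortest distinguishing word
```

The constructive flavor is the point: apartness is established by an explicit witness word, rather than by failing to prove equivalence, which is what lets L# work directly on tree-shaped automata without observation tables.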
Recent years have seen an astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, healthcare, hiring, housing, human resource management, law enforcement, and public safety, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted to be correct, reliable, fair, and safe, especially under adversarial attacks. How then can we deliver on the promise of the benefits of AI while addressing these scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI? Under the umbrella of trustworthy computing, there is a long-established framework employing formal methods and verification techniques for ensuring trust properties like reliability, security, and privacy of traditional software and hardware systems. Just as for trustworthy computing, formal verification could be an effective approach for building trust in AI-based systems. However, the set of properties needs to be extended beyond reliability, security, and privacy to include fairness, robustness, probabilistic accuracy under uncertainty, and other properties yet to be identified and defined. Further, there is a need for new property specifications and verification techniques to handle new kinds of artifacts, e.g., data distributions, probabilistic programs, and machine-learning-based models that may learn and adapt automatically over time. This talk will pose a new research agenda, from a formal methods perspective, for increasing trust in AI systems.