Lectures

Rajeev Alur: Regular Functions: Foundations and Application to Quantitative Policies over Streaming Data

When should a function mapping strings to strings, or strings to numerical costs, or more generally, strings/trees/infinite-strings to a set of output values with a given set of operations, be considered regular? We have proposed a new machine model of cost register automata, a class of write-only programs, to define such a notion of regularity. The class of regular functions has appealing theoretical properties analogous to those of regular languages: it is closed under a variety of function transformations; analysis questions such as equivalence and type checking are decidable; and it has exact logical and algebraic characterizations. In this series of lectures, I will first give an overview of these theoretical results.
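
As a concrete illustration, here is a minimal Python sketch of a cost register automaton in the (max, +) setting; the alphabet, costs, and register names are invented for the example.

    # A minimal cost register automaton (CRA) sketch over the (max, +) cost model.
    # Registers are write-only from the machine's point of view: each input symbol
    # triggers a parallel update of all registers by expressions over their old values.

    def run_cra(word):
        """Compute the maximum cost of a '#'-separated block, where 'a' costs 2
        and 'b' costs 5. One state suffices; the structure lives in the registers."""
        x, y = 0, 0          # x: cost of the current block, y: best completed block
        for ch in word:
            if ch == 'a':
                x, y = x + 2, y
            elif ch == 'b':
                x, y = x + 5, y
            elif ch == '#':
                x, y = 0, max(x, y)
        return max(x, y)     # final output expression over the registers

    print(run_cra("ab#bb#a"))   # blocks cost 7, 10, 2 -> 10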

Then, we will focus on applying this theory to the design and implementation of a policy language.

Decision making in cyber-physical systems often requires dynamic monitoring of a data stream to compute performance-related quantitative properties. We propose Quantitative Regular Expressions as a high-level declarative language for modular specifications of such quantitative policies.

This language is rooted in the theory of regular functions, and every policy described in this language can be compiled into a space-efficient streaming implementation. We describe a prototype system that is integrated within an SDN controller and show how it can be used to specify and enforce dynamic updates for traffic engineering as well as in response to security threats. We conclude by outlining the rich opportunities for both theoretical investigations and practical systems for real-time decision making in IoT applications.
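
To give a flavor of the declarative, combinator-based style, the following toy Python sketch imitates QRE-like combinators; the names (atom, iter_agg) and the packet records are illustrative rather than the actual QRE/StreamQRE API, and the evaluation below is a simple fold rather than the compiled streaming implementation described above.

    # Toy QRE-flavored combinators: each expression maps a sequence of data
    # items to a cost; larger policies are composed from smaller ones.

    def atom(pred, cost):
        """Match a single item satisfying pred; output its cost."""
        def f(items):
            return cost(items[0]) if len(items) == 1 and pred(items[0]) else None
        return f

    def iter_agg(f, agg, init):
        """Match a sequence of single-item f-matches, aggregating their costs."""
        def g(items):
            acc = init
            for it in items:
                c = f([it])
                if c is None:        # some item is not matched by f
                    return None
                acc = agg(acc, c)
            return acc
        return g

    # Policy: average packet latency, kept as a (sum, count) pair.
    pkt = atom(lambda p: True, lambda p: p["latency"])
    total = iter_agg(pkt, lambda acc, c: (acc[0] + c, acc[1] + 1), (0, 0))

    s, n = total([{"latency": 3}, {"latency": 5}, {"latency": 10}])
    print(s / n)   # 6.0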

Manfred Broy: An Integrated Service-Oriented Modeling Approach of Functionality and Architecture of Multifunctional Systems

Distributed, interacting real-time systems are the dominant paradigm in today's software-intensive applications. Instances are advanced embedded systems, such as those found in cars, airplanes, or robots, as well as distributed information systems and global networks such as the Internet. It is essential in the development of such systems to model the following aspects:

  • system interface behavior and functional architecture,

  • assumptions about the system contexts,

  • structured views on component architecture and the interaction of the components.

We present a service-oriented approach in terms of an integrated modeling framework for interactive distributed systems, with particular emphasis on requirements and architecture modeling, introducing service layers and layered architectures.
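
To hint at the stream-based view of interface behavior underlying such frameworks (a component as a function on communication streams, in the spirit of Broy's FOCUS approach), here is a tiny illustrative Python sketch; the components themselves are invented.

    # Components as functions on streams; composition of components is
    # function composition on streams. Python generators stand in for
    # (finite prefixes of) communication streams.

    def doubler(stream):
        """A component: maps each input message to one output message."""
        for msg in stream:
            yield 2 * msg

    def limiter(stream, bound=10):
        """A second component, composed downstream of the first."""
        for msg in stream:
            yield min(msg, bound)

    inputs = iter([1, 4, 7])
    print(list(limiter(doubler(inputs))))   # [2, 8, 10]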

Bernd Finkbeiner: Reactive Synthesis

Reactive synthesis automates the construction of reactive systems, such as hardware circuits, communication protocols, and embedded controllers. Instead of programming an implementation, the developer writes a formal specification of the desired system properties, for example in a temporal logic. Checking whether the specified properties are realizable, and constructing an actual implementation, are then taken care of by the synthesis algorithm.

In this series of lectures, I will trace the developments triggered by Alonzo Church's original formulation of the synthesis problem almost 60 years ago [1], from the early solutions in the 1960s [2, 3] to the practical tools [4, 5, 6] that have come out in the past few years. The lectures will give an overview of the automata- and game-theoretic foundations (cf. [7]), explore the spectrum of logics for the synthesis of reactive systems, from restricted logics with simpler synthesis problems like GR(1) [8] to advanced logics such as strategy logic [9] and coordination logic [10], and discuss the ideas behind efficient synthesis approaches like bounded synthesis [11].
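
As a taste of the game-theoretic foundations discussed in the lectures, the following Python sketch solves a safety game by the standard controllable-predecessor fixpoint; the tiny game at the end is invented, and practical tools operate symbolically rather than on explicit state sets.

    # Solve a safety game: compute the system's winning region, i.e. the
    # states from which the system can keep the play inside `safe` forever
    # (a greatest fixpoint of the controllable-predecessor operator).

    def solve_safety_game(states, edges, system_states, safe):
        win = set(s for s in states if s in safe)
        while True:
            def cpre(s):
                succs = edges[s]
                if s in system_states:               # system picks one good successor
                    return any(t in win for t in succs)
                return all(t in win for t in succs)  # environment picks: all must be good
            new_win = set(s for s in win if cpre(s))
            if new_win == win:
                return win
            win = new_win

    # Tiny example: environment state 0 can move to 1 or 2; state 2 is unsafe,
    # so the environment can force a violation from 0, and only 1 is winning.
    states = {0, 1, 2}
    edges = {0: [1, 2], 1: [1], 2: [2]}
    print(solve_safety_game(states, edges, system_states={1}, safe={0, 1}))  # {1}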

References

[1] A. Church, "Applications of recursive arithmetic to the problem of circuit synthesis," in Summaries of the Summer Institute of Symbolic Logic, vol. 1, pp. 3-50, Cornell Univ., Ithaca, NY, 1957.

[2] J. R. Büchi and L. H. Landweber, "Solving sequential conditions by finite-state strategies," Transactions of the American Mathematical Society, vol. 138, 1969.

[3] M. O. Rabin, Automata on Infinite Objects and Church's Problem. Boston, MA, USA: American Mathematical Society, 1972.

[4] A. Bohy, V. Bruyère, E. Filiot, N. Jin, and J.-F. Raskin, "Acacia+, a tool for LTL synthesis," in CAV, vol. 7358 of LNCS, pp. 652-657, Springer, 2012.

[5] R. Ehlers, "Unbeast: Symbolic bounded synthesis," in TACAS, vol. 6605 of Lecture Notes in Computer Science, pp. 272-275, Springer, 2011.

[6] B. Finkbeiner, M. Gieseking, and E.-R. Olderog, "Adam: Causality-based synthesis of distributed systems," in Computer Aided Verification, vol. 9206 of Lecture Notes in Computer Science, pp. 433-439, Springer, 2015.

[7] E. Grädel, W. Thomas, and T. Wilke, eds., Automata, Logics, and Infinite Games: A Guide to Current Research. New York, NY, USA: Springer-Verlag New York, Inc., 2002.

[8] N. Piterman, A. Pnueli, and Y. Sa'ar, "Synthesis of Reactive(1) designs," in Verification, Model Checking, and Abstract Interpretation, vol. 3855 of Lecture Notes in Computer Science, pp. 364-380, Springer Berlin Heidelberg, 2006.

[9] K. Chatterjee, T. A. Henzinger, and N. Piterman, "Strategy logic," Inf. Comput., vol. 208, no. 6, pp. 677-693, 2010.

[10] B. Finkbeiner and S. Schewe, "Coordination logic," in Computer Science Logic, vol. 6247 of Lecture Notes in Computer Science, pp. 305-319, Springer, 2010.

[11] B. Finkbeiner and S. Schewe, "Bounded synthesis," International Journal on Software Tools for Technology Transfer, vol. 15, no. 5-6, pp. 519-539, 2013.

Radu Grosu: Logical Foundations of Cyber-Physical Systems

Aim: The technological developments of the past two decades have nurtured a fascinating convergence of computer science and electrical, mechanical and biological engineering. Nowadays, computer scientists work hand in hand with engineers to model, analyze and control complex systems that exhibit discrete as well as continuous behavior. Examples of such systems include automated highway systems, air traffic management, automotive controllers, robotics and real-time circuits. They also include biological systems, such as immune response, bio-molecular networks, gene-regulatory networks, protein-signaling pathways and metabolic processes.

The more pervasive and complex these systems become, the more the infrastructure of our modern society relies on their dependability. Traditionally, however, the modeling, analysis and control theory of discrete systems is quite different from that of continuous systems. The former is based on automata theory, a branch of discrete mathematics, where time is typically abstracted away. The latter is based on linear systems theory and differential (or difference) equations, a branch of continuous mathematics where time is of the essence. This course focuses on the principles underlying their combination. By the end of the course, students will have acquired detailed knowledge of, and substantial experience in, the mathematical modeling, analysis and control of hybrid systems.

Subject: Hybrid automata as discrete-continuous models of hybrid systems. Executions and traces of hybrid automata. Infinite transition systems as a time-abstract semantics of hybrid automata. Finite abstractions of infinite transition systems. Language inclusion and language equivalence. Simulation and bisimulation. Quotient structures. Approximate notions of inclusion and simulation. State logics and model checking. Partition refinement and model checking within the mu-calculus. Classes of hybrid automata for which the model-checking problem is decidable. Modern overapproximation techniques for reachability analysis.
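
As a concrete example of the subject matter, the following Python sketch simulates the classic thermostat hybrid automaton: two discrete modes with mode-dependent continuous dynamics and guard-triggered switches. All constants are illustrative, and the integration is a crude Euler step.

    # Thermostat hybrid automaton: modes "On"/"Off", continuous flow dT/dt
    # in each mode, and discrete jumps when a guard on the temperature fires.

    def simulate(t_end, dt=0.01):
        mode, temp, t = "Off", 22.0, 0.0
        trace = []
        while t < t_end:
            # continuous flow: the derivative depends on the discrete mode
            dT = 5.0 - 0.1 * temp if mode == "On" else -0.1 * temp
            temp += dT * dt
            # discrete jumps when a guard becomes true
            if mode == "Off" and temp <= 18.0:
                mode = "On"
            elif mode == "On" and temp >= 24.0:
                mode = "Off"
            trace.append((round(t, 2), mode, round(temp, 2)))
            t += dt
        return trace

    print(simulate(5.0)[-1])   # final (time, mode, temperature) sample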

Connie Heitmeyer: Formal Requirements Models for System Safety and Security

To capture software system requirements, one must elicit and analyze the required externally visible behavior of the system to be built within some physical context. Moreover, the system requirements should be modeled and specified in terms of required system services and environmental assumptions. System requirements may be expressed in two different ways—as properties of the externally visible system behavior or as an operational system model. In either case, the system requirements are represented in terms of quantities in the system environment. Obtaining consistent, complete, and correct requirements models is difficult but critically important. Incorrect and incomplete requirements have been repeatedly identified as a major cause of project cost overruns, delivery delays, and failure to meet users’ expectations of the system.

This series of lectures provides a systematic formal approach to the modeling, specification, validation, and verification of requirements for high assurance systems—systems where compelling evidence is needed that the system satisfies critical properties, such as security, safety, timing, and fault-tolerance properties. The objective of this approach for a given system is a complete, consistent, correct, and properly organized set of software requirements, including environmental assumptions. The approach is model-based and relies in part on the use of formal specifications for specific kinds of analyses.

The course first introduces basic principles of high quality requirements modeling and specification. Key notions such as “requirements”, “model”, “specification”, “environmental assumption”, “monitored” and “controlled variables”, and “system mode” are defined and related to one another. A modeling framework is then introduced in the specific context of requirements for large, complex software systems. Different kinds of models are used to demonstrate and analyze various properties of the system-to-be: among these are requirements models, simulation models, verification models, security models, fault-tolerance models, and testing models. These models are designed for different purposes. For example, the role of a verification model is to support automated checking that an operational model of the system requirements satisfies critical properties. In contrast, the role of a simulation model (or an animation) is to help the user validate the requirements specification—i.e., to ensure that symbolic execution of the system using a simulator captures the intended system behavior.

Models need to be formalized to enable formal reasoning about them. Various specification languages for specifying requirements models are reviewed. Chief among these languages are table-based notations such as Software Cost Reduction (SCR), which allow a user to express the required system behavior as a set of tables. Other common notations are graphical (for example, finite state diagrams) and hybrid (for example, Leveson and Heimdahl’s Requirements State Machine Language combines tables and a Statecharts-like notation).
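
To make the table-based idea concrete, here is a toy Python rendering of an SCR-style condition table, together with brute-force checks of disjointness (consistency) and coverage (completeness) over a sampled state space; real SCR tools perform these checks symbolically, and the variables and table entries are invented.

    # An SCR-style condition table for a controlled variable "alarm" as a
    # function of monitored variables (temp, override): each row pairs a
    # condition with the value the controlled variable takes.

    from itertools import product

    table = [
        (lambda temp, override: temp > 100 and not override, "on"),
        (lambda temp, override: temp <= 100 or override,     "off"),
    ]

    states = list(product(range(90, 111), [False, True]))   # (temp, override)

    for s in states:
        hits = [v for cond, v in table if cond(*s)]
        assert len(hits) >= 1, f"incomplete: no row covers {s}"
        assert len(hits) <= 1, f"inconsistent: rows overlap on {s}"
    print("table is consistent and complete on the sampled state space")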

The course will then discuss various techniques supporting the formulation of high-quality requirements, the analysis and validation of requirements to improve their quality, and the use of requirements in obtaining correct system code. Among the techniques covered are

  • Modeling and formal specification of requirements

  • Consistency and completeness checking of requirements

  • Simulation of requirements to check their validity

  • Generating invariants from requirements specifications

  • Formal verification of requirements

  • Testing and automatic code generation based on an operational requirements model

  • Modeling and analyzing systems for critical properties (e.g., security and safety)

The presentations will be illustrated with representative examples and tool demonstrations, including a number of examples that show the utility of the methods and tools for practical systems.

References

Constance Heitmeyer, Ralph Jeffords, and Bruce Labaw, “Automated consistency checking of requirements specifications,” ACM Trans. on Software Engineering and Methodology, July 1996.

Constance Heitmeyer, James Kirby, Bruce Labaw, Myla Archer, and Ramesh Bharadwaj, “Using abstraction and model checking to detect safety violations in requirements specifications,” IEEE Trans. Software Eng. 24(11), Special Issue on Managing Inconsistency in Software Development, Nov. 1998.

Ralph Jeffords and Constance Heitmeyer: “Automatic generation of state invariants from requirements specifications,” Proc. ACM SIGSOFT International Symp. on Foundations of Software Engineering. November 3-5, 1998, Lake Buena Vista, FL, USA. ACM, 1998.

Angelo Gargantini and Constance L. Heitmeyer: Using Model Checking to Generate Tests from Requirements Specifications. Proc. 7th Eur. Software Engineering Conf., held jointly with the 7th ACM SIGSOFT Symp. on Foundations of Software Engineering (ESEC/FSE'99), Toulouse, France, Lecture Notes in Computer Science 1687, 1999.

Constance L. Heitmeyer, Myla Archer, Ramesh Bharadwaj, and Ralph D. Jeffords: Tools for constructing requirements specifications: The SCR toolset at the age of ten. Computer Systems Science and Engineering 20(1), Special issue on Automated Tools for Requirements Engineering, January 2005.

Constance Heitmeyer, Ralph Jeffords, "Applying a formal requirements method to three NASA systems: Lessons learned," Proceedings, 2007 IEEE Aerospace Conference, Big Sky, MT, March 2007.

Constance L. Heitmeyer, Myla Archer, Elizabeth I. Leonard, and John McLean: Applying formal methods to a certifiably secure software system. IEEE Trans. Software Eng. 34(1): 82-98, January 2008.

Ralph D. Jeffords, Constance L. Heitmeyer, Myla Archer, and Elizabeth I. Leonard: Model-based construction and verification of critical systems using composition and partial refinement. Formal Methods in System Design 37(2-3), 2010.

Myla Archer, Elizabeth Leonard and Constance Heitmeyer: Idea: Writing secure C programs with SecProve. Engineering Secure Software and Systems (ESSOS) Symposium. Lecture Notes in Computer Science 7781. Paris, 2013.

Constance Heitmeyer, Marc Pickett, et al.: Building high assurance human-centric decision systems. Automated Software Engineering 22 (2), 2015.

Constance Heitmeyer and Elizabeth Leonard: Obtaining trust in autonomous systems: Tools for formal model synthesis and validation. ICSE FormaliSE Workshop, 2015.

Kim Larsen: From Timed Automata to Stochastic Hybrid Games -- Model Checking, Synthesis, Performance Analysis and Machine Learning

Timed automata and games, priced timed automata, and energy automata have emerged as useful formalisms for modeling real-time and energy-aware systems as found in many embedded and cyber-physical systems. During the last 20 years, the real-time model checker UPPAAL has been developed, allowing for efficient verification of hard timing constraints of timed automata. Moreover, a number of significant branches exist, e.g. UPPAAL CORA, providing efficient support for optimization, and UPPAAL TIGA, allowing for automatic synthesis of strategies for given safety and liveness objectives. The branch UPPAAL SMC provides a highly scalable engine supporting (distributed) statistical model checking of stochastic hybrid automata, and most recently the new branch UPPAAL STRATEGO supports safe and optimal strategies for stochastic hybrid games by combining symbolic methods with machine learning.
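
To illustrate the idea behind statistical model checking as in UPPAAL SMC, the following Python sketch estimates a time-bounded reachability probability by Monte Carlo simulation of a toy stochastic model; the model and all numbers are invented.

    # Statistical model checking in miniature: estimate the probability of a
    # time-bounded reachability property by sampling random runs. The toy
    # model is a job that retries, with exponentially distributed attempt times.

    import random

    def run_succeeds_within(bound):
        """One random run: each attempt takes Exp(1) time and succeeds w.p. 1/3."""
        t = 0.0
        while True:
            t += random.expovariate(1.0)      # duration of this attempt
            if t > bound:
                return False
            if random.random() < 1 / 3:       # this attempt succeeds
                return True

    N = 100_000
    estimate = sum(run_succeeds_within(5.0) for _ in range(N)) / N
    print(f"P(success within 5 time units) ~ {estimate:.3f}")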

The lectures will review the various branches of UPPAAL and their concerted application to a range of real-time and cyber-physical examples, including schedulability and performance evaluation of mixed-criticality systems, modeling and analysis of biological systems, energy-aware wireless sensor networks, smart grids and smart houses, and battery scheduling.

Peter Müller: Modular Program Verification

Modular program verification constructs a correctness proof for each program component in isolation, without considering its clients or the implementations of the components it builds on. Modularity makes verification scalable, allows one to give correctness guarantees for individual components such as libraries, and reduces the re-verification effort required by program changes.

Modular verification is especially difficult for heap-manipulating programs, where components may interact in subtle ways through shared memory, and for concurrent programs, where thread interleavings complicate reasoning.

In this series of lectures, I will present a verification approach that addresses these challenges using a notion of access permissions for memory locations. I will introduce a permission-based program logic (similar to separation logic) and explain how to automate verification in this logic using standard SMT solvers. In particular, I will present the Viper verification infrastructure, which allows one to express verification problems in an intermediate language and provides several back-ends that automate the verification effort.
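
The following hand-rolled Python sketch illustrates the underlying idea of access permissions: reading or writing a heap location is admitted only if the corresponding permission has been acquired, e.g. as granted by a precondition. This is an illustration of the concept only, not Viper or its intermediate language.

    # A toy "verifier state" that tracks which heap locations may be accessed,
    # in the spirit of separation logic / implicit dynamic frames.

    class NoPermission(Exception):
        pass

    class Heap:
        def __init__(self):
            self.cells, self.perms = {}, set()

        def acquire(self, loc):   # e.g., granted by a precondition acc(loc)
            self.perms.add(loc)

        def read(self, loc):
            if loc not in self.perms:
                raise NoPermission(f"no permission to read {loc}")
            return self.cells.get(loc, 0)

        def write(self, loc, val):
            if loc not in self.perms:
                raise NoPermission(f"no permission to write {loc}")
            self.cells[loc] = val

    h = Heap()
    h.acquire("x.f")
    h.write("x.f", 41)
    h.write("x.f", h.read("x.f") + 1)   # fine: we hold the permission
    try:
        h.read("y.f")                    # rejected: no permission acquired
    except NoPermission as e:
        print(e)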

Doron Peled: Model checking and runtime verification

Automatic verification is used extensively to detect design and programming errors. We will look at various model checking techniques: from explicit-state techniques, based on graph algorithms and automata theory, to methods based on efficient SAT solvers. Alternative methods and heuristics serve applications with different characteristics, e.g., software and hardware systems. In addition, we will look at runtime verification techniques that inspect the progress of executions as they unfold.
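
As a small runtime verification example, here is a Python monitor for the invented safety property "no grant without a pending request", realized as a two-state automaton that inspects the trace event by event.

    # A runtime monitor as a finite automaton over an event trace: it flags
    # a violation as soon as a finite prefix refutes the safety property.

    def monitor(trace):
        pending = False                 # automaton state: is a request pending?
        for i, ev in enumerate(trace):
            if ev == "request":
                pending = True
            elif ev == "grant":
                if not pending:
                    return f"violation at event {i}: grant without request"
                pending = False
        return "no violation on this trace"

    print(monitor(["request", "grant", "grant"]))   # second grant is spurious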

Alexander Pretschner: Defect-Based Testing

Intuition and textbooks suggest that good tests are those that unveil faults. Hypothesizing that there are correct programs, this means that there are no good tests for correct programs. A modified definition of good test cases therefore demands that tests unveil potential faults, and furthermore do so with good cost effectiveness. In this set of lectures, we study the theoretical [e.g., 1,2] and empirical [e.g., 3,4,5] defect detection ability of partition-based testing, specifically coverage-based testing, both when used for test case selection and for test case assessment. We then present a theory of defect-based testing [6,7] and show how faults can be defined as a syntactic transformation and/or a characterization of the failure domain of a program. To show the applicability of this model, we show how this theory captures defect models including simple division-by-zero or stuck-at-1 faults, but also limit testing, testing strategies for finite state machines, testing strategies for object-oriented software, performance as well as security testing, common concurrency faults, and combinatorial testing. By means of examples, we show how to operationalize these defect models for test case generation in the context of security testing [8] and testing embedded systems. Finally, we discuss the usefulness of fault models for the derivation of checklists and present a life cycle model for defect models.
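
The following Python sketch makes the core idea concrete: a fault is modeled as a syntactic transformation (a mutant), and a test suite is assessed by whether it kills the mutant; the program and the mutation are invented for the example.

    # Defect-based test assessment in miniature: a boundary mutation and two
    # test suites, only one of which distinguishes mutant from original.

    def sum_to(n):
        """Sum of the integers 1..n."""
        return sum(range(1, n + 1))

    def sum_to_mutant(n):
        """Mutant: the loop boundary is off by one and drops n itself."""
        return sum(range(1, n))

    weak_suite = [0]            # both programs return 0: the mutant survives
    strong_suite = [0, 1, 7]    # n = 1 already distinguishes them

    for name, suite in [("weak", weak_suite), ("strong", strong_suite)]:
        killed = any(sum_to(n) != sum_to_mutant(n) for n in suite)
        print(name, "suite:", "mutant killed" if killed else "mutant survived")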

References

[1] E.J. Weyuker, B. Jeng. Analyzing Partition Testing Strategies. IEEE TSE, 17(7), pp. 703-711; 1991.

[2] W.J. Gutjahr. Partition Testing vs. Random Testing: The Influence of Uncertainty. IEEE TSE, 25(5), pp. 661-674, 1999.

[3] Y.K. Malaiya, M.N. Li, J.M. Bieman, R. Karcich. Software Reliability Growth with Test Coverage. IEEE Transactions on Reliability, 51(4); pp. 420-426; 2002.

[4] A. Mockus, N. Nagappan, T.T. Dinh-Trong. Test Coverage and Post-Verification Defects: A Multiple Case Study. In: Procs. 3rd International Symposium on Empirical Software Engineering and Measurement; pp. 291-301; 2009.

[5] L. Inozemtseva, R. Holmes: Coverage is not Strongly Correlated with Test Suite Effectiveness. Procs. ICSE, 2014.

[6] L.J. Morell. A Theory of Fault-Based Testing. IEEE TSE, 16(8); pp. 844-857, 1990.

[7] A. Pretschner, D. Holling, R. Eschbach, M. Gemmar. A Generic Fault Model for Quality Assurance. In: Procs. MODELS'13; pp. 87-103; 2013.

[8] M. Büchler, J. Oudinet, A. Pretschner. Semi-Automatic Security Testing of Web Applications from a Secure Model. In: Procs. 6th IEEE Intl. Conf. on Software Security and Reliability; pp. 253-262; 2012.

Grigore Rosu: K -- a semantic framework for programming languages and formal analysis tools

K is a rewrite-based executable semantic framework in which programming languages, type systems and formal analysis tools can be defined using configurations, computations and rules. Configurations organize the state in units called cells, which are labeled and can be nested. Computations carry computational meaning as special nested list structures sequentializing computational tasks, such as fragments of a program; they extend the original language's abstract syntax. K (rewrite) rules make it explicit which parts of the term they only read, only write, both read and write, or do not care about. This makes K suitable for defining truly concurrent languages even in the presence of sharing. Computations are like any other terms in a rewriting environment: they can be matched, moved from one place to another, modified, or deleted. This makes K suitable for defining control-intensive features such as abrupt termination, exceptions or call/cc. Several real languages have been defined in K, such as C (ISO C11 standard), Java (1.4), JavaScript (ES5), Python, Scheme, Verilog, and dozens of prototypical or classroom ones. The ISO C11 semantics and a fast OCaml backend for K power RV-Match (https://runtimeverification.com/match), one of the most advanced commercial automated analysis tools for C.
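
To convey the flavor in Python rather than K's actual notation, the following sketch represents a configuration as labeled cells and applies rewrite rules that each mention only the parts of the configuration they touch; everything here is an invented miniature.

    # A toy rewrite-based interpreter: the configuration has a <k> cell (the
    # computation, as a task list) and a <state> cell (a variable store).

    def rule_lookup(cfg):
        """Replace a variable occurring in the k cell by its value in the state."""
        k, state = cfg["k"], cfg["state"]
        for i, item in enumerate(k):
            if isinstance(item, str) and item in state:
                return {"k": k[:i] + [state[item]] + k[i + 1:], "state": state}

    def rule_add(cfg):
        """Rewrite two leading integers followed by '+' to their sum."""
        k = cfg["k"]
        if len(k) >= 3 and k[2] == "+" and all(isinstance(i, int) for i in k[:2]):
            return {"k": [k[0] + k[1]] + k[3:], "state": cfg["state"]}

    def run(cfg, rules):
        """Apply rules until no rule matches (a normal form is reached)."""
        changed = True
        while changed:
            changed = False
            for r in rules:
                nxt = r(cfg)
                if nxt is not None:
                    cfg, changed = nxt, True
                    break
        return cfg

    # Evaluate the postfix expression "x y +" with x |-> 3, y |-> 4.
    cfg = {"k": ["x", "y", "+"], "state": {"x": 3, "y": 4}}
    print(run(cfg, [rule_lookup, rule_add]))   # {'k': [7], 'state': {...}}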

The lectures will teach attendees how to define a language or a type system in K, and then how to automatically obtain an executable model of the defined language or system which is amenable to formal analysis. Major real language semantics defined in K will also be discussed, as well as commercial formal analysis tools based on them, such as the RV-Match ISO C11 undefinedness checker (https://runtimeverification.com/match/).

John Rushby: Assurance Cases for Dependable Software Systems

There are uncertainties in the world, so we cannot claim that a dependable system will be perfect; instead we claim that its serious failures will be very rare. For example, the top-level claim for critical airplane software is that catastrophic failure conditions are not expected to occur in the operational lifetime of all airplanes of the type concerned. This can be interpreted as a catastrophic failure rate of less than one per billion hours.
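
A back-of-the-envelope calculation makes this interpretation concrete, assuming failures follow a Poisson process; the fleet-exposure figure is illustrative.

    # Probability of at least one catastrophic failure over a fleet's lifetime,
    # under a Poisson model with the claimed failure rate.

    import math

    rate = 1e-9                  # catastrophic failures per operating hour
    fleet_hours = 1e8            # e.g., 1000 airplanes * 100,000 hours each

    p_at_least_one = 1 - math.exp(-rate * fleet_hours)
    print(f"P(at least one catastrophic failure) ~ {p_at_least_one:.3f}")   # ~0.095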

We cannot experimentally validate such tiny probabilities during development. Instead, we attempt to eliminate faults using good engineering practice, testing, and formal methods. What is the relationship between failure rates and attempted elimination of faults? And how do we "build the case" that our software is dependable?

My lectures will focus on the topic of an "Assurance Case," where a structured argument is used to justify claims about system dependability, based on evidence about its design, implementation, and behavior. This will involve consideration of evidence and argumentation, runtime monitoring, trust in tools, and related topics.

Along the way I will introduce interactive theorem proving (assuming no one else does so) and will conclude with future challenges, such as assurance for automated driving.

Mark Ryan: Security-focussed protocols, and verification challenges

The lectures will consider examples of security-focussed protocols, such as those used in internet-based voting, cryptocurrencies (like Bitcoin), secure messaging, and possibly also the internet of things.

The design and intended properties of these protocols will be discussed, along with challenges for verifying them.