Explainability of Real-Time Systems and their Analysis (ERSA)
1st International Workshop on Explainability of Real-Time Systems and their Analysis at the IEEE Real-Time Systems Symposium (RTSS 2022) in Houston, USA
Today, almost all verification techniques provide answers to questions but do not provide explanations. This workshop aims to address that.
Important dates
Submission deadline: September 12, 2022
Notification of acceptance: October 10, 2022
Final version due: October 24, 2022
Workshop: December 5, 2022
Program Chair
Bjorn Andersson, SEI/CMU, USA
Program Committee
Ahlem Mifdaoui, UToulouse, France
Al Mok, UTexas, USA
Anaïs Finzi, TTTech, Austria
Björn Brandenburg, MPI-SWS, Germany
Dakshina Dasari, Bosch, Germany
Hyoseung Kim, UCR, USA
Insup Lee, UPenn, USA
James Anderson, UNC, USA
John Goodenough, SEI/CMU, USA
John Lehoczky, CMU, USA
John Rushby, SRI, USA
Jonathan Preston, LMCO, USA
Mark Klein, SEI/CMU, USA
Raj Rajkumar, CMU, USA
Sanjoy Baruah, WUSTL, USA
Shige Wang, GM, USA
Stefan Mitsch, CMU, USA
Wang Yi, Uppsala, Sweden
Program
Location: Room Sam Rayburn in DoubleTree by Hilton Hotel Houston – Greenway Plaza
08:40am Welcome
Bjorn Andersson
08:50am Kick-off / introduction
Bjorn Andersson, "The case for explainability of real-time systems"
09:20am Session 1: Certificate as explanation
Sanjoy Baruah and Pontus Ekberg, "Certificates of Real-Time Schedulability"
(Virtual) Raffaele Romagnoli, "Understanding Safety of Linear Real-Time Systems from Lyapunov Theory and Quadratic Boundedness"
10:00am Coffee break
10:30am Session 2: Formal methods with explanation
(Virtual) Stefan Mitsch, "Formal Artifacts as Explanations for System Correctness in Cyber-Physical Systems"
(Virtual) Ruben Martins, "Towards Explainable Formal Verification"
11:10am Invited paper
(Virtual) Hongrui Zheng, Zirui Zang, Shuo Yang, Rahul Mangharam, "Towards Explainability in Modular Autonomous Vehicle Software"
11:30am Panel
Björn Brandenburg, Wang Yi, Renato Mancuso, Ademola (Peter) Adejokun.
What are the challenges in explainability? What is your position on explainability? What role should explainability play? Where in the software development life-cycle (and for whom) does explainability add most value?
11:59am Lunch
01:00pm Session 3: Security, real-time, and explainability
Giann Nandi, David Pereira, José Proença, José Santos, Lourenço Rodrigues, André Lourenço and Eduardo Tovar, "MARS: a toolset for the safe and secure deployment of heterogeneous distributed systems"
(Virtual) Elena Troubitsyna, "Towards a Systematic Analysis of Timing Aspect of Safety-Security Interactions"
Sahiti Bommareddy, Maher Khan, David J Sebastian Cardenas, Carl Miller, Christopher Bonebrake, Yair Amir and Amy Babay, "Real-Time Byzantine Resilient Power Grid Infrastructure: Evaluation and Trade-offs"
02:00pm Related efforts
02:05pm What is next?
03:00pm Closing
Paper submission
News: Submission deadline extended from September 5, 2022 to September 12, 2022
Workshop website: https://sites.google.com/view/ersa22
Format: Extended abstracts or position papers that help define the area, up to 4 pages, using the IEEE manuscript template for conference proceedings
Online submission: https://easychair.org/conferences/?conf=ersa22
Motivation, Goal, and Topics
Background: Many software-intensive systems of current and future application domains require (or will require) approval from a certification authority before being deployed. Examples of such application domains include: aircraft, medical devices, spacecraft, autonomous ground vehicles, autonomous air vehicles. Examples of current certification authorities include: Federal Aviation Administration, European Union Aviation Safety Agency, Food and Drug Administration.
Current pain: Today, each established application domain has a set of guidance documents. These tend to be process-oriented; i.e., they (i) prescribe how development of the system should proceed, (ii) prescribe how the applicant (the organization that develops the system) should communicate with the certification authority, (iii) state high-level objectives, and (iv) point out pitfalls that should be avoided. This mindset has been successful in many domains. For example, among US air carriers, the safety record today is much better than it was decades ago. Unfortunately, this mindset also has limitations: (i) it may not carry over to future application domains, (ii) it limits frequent and late changes, (iii) it is process-driven rather than focused on direct evidence of the safety of the software, and (iv) it does not take full advantage of the research of the real-time systems community: the knowledge of that community is not present in these documents, and the documents do not cite its papers. Achieving safety through extensive testing appears to be problematic because it precludes frequent and late changes. Achieving safety through models fed into verification techniques requires tool qualification. Thus, it is worth exploring alternatives; specifically, exploring (i) whether explainability can help, (ii) what explainability means, and (iii) how it can be achieved for real-time systems.
Goal: The goal of this workshop is to understand the role, meaning, and value of explanation in critical systems—in particular real-time systems.
Non-Exhaustive List of Topics:
Things needing explanation (e.g., the output of a schedulability or WCET analysis, a post-mortem/close-call trace)
Ways of explaining: Examples (e.g., a Gantt chart of a schedule; see the Gantt-chart sketch after this list); Proofs (e.g., trees, unsat cores); Models (as in a satisfying assignment)
Representations of explanations: Graphical (pictures); Textual; Anthropomorphizing (e.g., min-max or forall-exists alternation can be viewed as a game); Analogies
Building explanations from the output of existing tools (e.g., from a proof that is a sequence of lemmas, generate a picture for each lemma)
Performance metrics for explanations (e.g., the human skill and/or effort needed to understand an explanation)
Points in the software development life cycle, and the persons, for which explainability is most valuable
Changes to certification guidance documents needed for the value of explainability to accrue
Using ideas from theoretical computer science, such as:
Computational complexity issues in explainability (e.g., the size of a certificate; how the size of an explanation of a property relates to that of an explanation of its negation; a problem versus its complementary problem, X versus co-X)
Arthur-Merlin protocols, zero-knowledge proofs, probabilistically checkable proofs, and interactive proof systems to (i) allow the applicant to preserve the privacy of some information, (ii) allow the certification authority to sample some of the evidence, and (iii) allow the certification authority to detect whether an applicant is cheating (see the spot-check sketch after this list). Of particular interest is the case where the applicant claims not only to have performed a computation but also to have taken measurements of the real world, and the certification authority is interested in checking that the applicant has really taken these measurements.
Program checking, to allow checking the output of a complex calculation (or proof) instead of re-running the calculation itself (see the certificate-checking sketch after this list)
Application of the aforementioned ideas in less formal settings (e.g., assurance cases)
Use of explanation in a challenge-response protocol between applicant and certification authority
Use of explanation as interface between human designers in different teams
Inspecting part of an explanation while maintaining some confidence in the claim that it explains
Verification procedures that create a perturbed question from the original question and answer both, such that the perturbed question differs from the original as little as possible while their answers differ as much as possible (see the sensitivity-analysis sketch after this list)
Design principles that facilitate explainability (are some schedulers, e.g., time-triggered ones, easier to explain? Are more "deterministic" systems easier to explain?)
Explainability in other domains (e.g., AI) and its use in explainability of real-time systems
Re-interpretation of previously known results (e.g., schedulability tests) in view of explainability
Explainability of tools that rely on measurements (e.g., measurement-based WCET analysis)
Explainability of real-time systems results in safety engineering
Assurance cases as a form of explainability
Explanations of how design elements satisfy requirements (tracing)
Explanation of why a property of the physical world (e.g., the state of a plant controlled by a controller) is satisfied/violated because of an event in the cyber-realm (e.g., deadline miss of control software)
Explanations that encourage human engagement and curiosity (i.e., don't dump a large volume of explanations/arguments that kill human curiosity)
If a problem comes in two variants, an optimization problem and a decision problem, can the optimization variant provide an explanation for the decision variant? (cf. the certificate-checking sketch after this list)
Root cause analysis
Contrastive explanation
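Illustrative sketches
To make some of the topics above concrete, here are a few minimal sketches. They are illustrations of the kinds of artifacts the topics refer to, not prescribed techniques; all task sets, parameters, and names in them are made up. The first sketch relates to explaining by example: it simulates a hypothetical periodic task set under rate-monotonic fixed-priority preemptive scheduling (unit-length time slots, implicit deadlines) and prints an ASCII Gantt chart of one hyperperiod, so that a yes/no schedulability answer comes with a picture a human can inspect.
```python
from math import gcd

# Hypothetical task set: (name, WCET, period), implicit deadlines.
# Rate-monotonic: shorter period = higher priority.
tasks = sorted([("T1", 1, 4), ("T2", 2, 6)], key=lambda t: t[2])

hyper = 1
for _, _, period in tasks:
    hyper = hyper * period // gcd(hyper, period)   # hyperperiod = lcm

remaining = [0] * len(tasks)        # unfinished work per task
timeline = []                       # who runs in each unit-length slot
for t in range(hyper):
    for i, (_, wcet, period) in enumerate(tasks):
        if t % period == 0:         # new job released at time t
            remaining[i] = wcet
    ready = [i for i in range(len(tasks)) if remaining[i] > 0]
    if ready:
        i = ready[0]                # highest-priority ready task runs
        remaining[i] -= 1
        timeline.append(tasks[i][0])
    else:
        timeline.append(".")        # idle slot

# One row per task, plus a time axis: the chart itself is the explanation.
for name, _, _ in tasks:
    row = "".join("#" if slot == name else " " for slot in timeline)
    print(f"{name:>3} |{row}|")
print("     " + "".join(str(t % 10) for t in range(hyper)))
```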
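The second sketch relates to evidence sampling by the certification authority. It is not a zero-knowledge or interactive proof system, only the simplest ingredient of one: the applicant commits to a measurement log with salted hashes, and the authority samples random entries and checks them against the commitment. The log contents are made up.
```python
import hashlib
import os
import random

def commit(entries):
    """Applicant: commit to each log entry with a salted SHA-256 hash."""
    salts = [os.urandom(16) for _ in entries]
    digests = [hashlib.sha256(salt + entry.encode()).hexdigest()
               for salt, entry in zip(salts, entries)]
    return digests, salts

def spot_check(digests, reveal, k=2):
    """Authority: sample k random indices; reveal(i) returns (salt, entry).
    Accept only if every revealed entry matches the committed digest."""
    for i in random.sample(range(len(digests)), k):
        salt, entry = reveal(i)
        if hashlib.sha256(salt + entry.encode()).hexdigest() != digests[i]:
            return False
    return True

# Hypothetical WCET measurement log (made-up numbers, microseconds).
log = ["job 0: 812 us", "job 1: 790 us", "job 2: 955 us", "job 3: 801 us"]
digests, salts = commit(log)          # digests are sent to the authority
print(spot_check(digests, lambda i: (salts[i], log[i])))   # True if honest
```
An applicant who alters entries after committing is caught with probability that grows with the number of sampled entries.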
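The third sketch relates to program checking and to certificates as explanations. Instead of trusting (or re-running) a response-time analysis tool, a checker verifies that a claimed bound R is a valid witness under classical response-time analysis for fixed-priority preemptive scheduling with constrained deadlines: R must be at most the deadline and at least the demand C_i + sum over higher-priority tasks j of ceil(R/T_j)*C_j. Checking the certificate takes a few lines; producing it is the hard part. This also illustrates the optimization-versus-decision question above: the computed bound explains the yes/no answer.
```python
from math import ceil

def check_rt_certificate(tasks, i, claimed_r):
    """Check that claimed_r witnesses schedulability of task i under
    fixed-priority preemptive scheduling (constrained deadlines D <= T).
    tasks: list of (C, D, T), index 0 = highest priority.
    claimed_r is valid iff claimed_r <= D_i and
    claimed_r >= C_i + sum_{j < i} ceil(claimed_r / T_j) * C_j."""
    c_i, d_i, _ = tasks[i]
    demand = c_i + sum(ceil(claimed_r / t_j) * c_j
                       for c_j, _, t_j in tasks[:i])
    return claimed_r <= d_i and claimed_r >= demand

# Hypothetical task set (C, D, T), rate-monotonic priority order.
tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
print(check_rt_certificate(tasks, 2, 10))   # True: 10 <= 12, 10 >= 3+3*1+2*2
print(check_rt_certificate(tasks, 2, 8))    # False: 8 < 3+2*1+2*2 = 9
```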
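The last sketch relates to the perturbed-question topic (and to contrastive explanation): for a made-up unschedulable task set, it searches for the largest WCET of one task under which the set would be schedulable, yielding an explanation of the form "unschedulable, but it would become schedulable if this task's WCET were at most c*". Classical exact response-time analysis is the underlying test; integer WCETs are assumed.
```python
from math import ceil

def schedulable(tasks):
    """Exact response-time analysis for fixed-priority preemptive
    scheduling, constrained deadlines; tasks = [(C, D, T)] in
    decreasing priority order."""
    for i, (c_i, d_i, _) in enumerate(tasks):
        r = c_i
        while True:
            nxt = c_i + sum(ceil(r / t_j) * c_j
                            for c_j, _, t_j in tasks[:i])
            if nxt > d_i:
                return False      # task i misses its deadline
            if nxt == r:
                break             # fixed point: worst-case response time
            r = nxt
    return True

def contrastive_wcet(tasks, i):
    """Largest WCET of task i for which the set would be schedulable
    (None if no reduction of task i's WCET helps)."""
    for c in range(tasks[i][0], 0, -1):   # integer WCETs assumed
        trial = list(tasks)
        trial[i] = (c, tasks[i][1], tasks[i][2])
        if schedulable(trial):
            return c
    return None

# Hypothetical unschedulable set (C, D, T) in priority order.
tasks = [(1, 4, 4), (2, 6, 6), (6, 12, 12)]
print(schedulable(tasks))           # False
print(contrastive_wcet(tasks, 2))   # 5: third task schedulable iff WCET <= 5
```
The perturbed question differs from the original in a single parameter, while the answers differ maximally (no versus yes).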