Explainability of Real-Time Systems and Their Analysis (ERSA)
4th International Workshop on Explainability of Real-time Systems and their Analysis at the IEEE Real-Time Systems Symposium (RTSS 2025) in Boston, MA, USA
Today, almost all verification techniques provide answers to questions but no explanations. This workshop addresses that gap.
Important dates
• Submission deadline:
September 19, 2025 (extended from September 12, 2025)
• Notification of acceptance:
October 10, 2025
• Final version due:
October 24, 2025
• Workshop:
December 2, 2025
Program Chairs
Bjorn Andersson,
SEI/CMU, USA
Isaac Amundson,
Collins Aerospace, USA
Program Committee
Ademola Adejokun
Lockheed Martin, USA
Al Mok,
UTexas, USA
Chi-Sheng (Daniel) Shih,
NTU, Taiwan
C. Michael Holloway,
NASA Langley, USA
Dionisio de Niz,
SEI/CMU, USA
Ganesh Pai,
KBR/NASA, USA
John Lehoczky,
SEI/CMU, USA
Kit Siu,
GE Aerospace, USA
Mark Klein,
SEI/CMU, USA
Mats Heimdahl,
UMinnesota, USA
Mauricio Castillo-Effen,
Lockheed Martin, USA
Philippa Ryan,
UYork, UK
Shige Wang,
Motional, USA
Stefan Hallerstede,
AarhusU, DK
Zamira Daw,
UStuttgart, DE
Paper submission
Workshop website: https://sites.google.com/view/ersa25
Format: Extended abstracts or position papers to define the area, up to 4 pages, using the IEEE Conference Proceedings manuscript template
Online submission: https://easychair.org/conferences/?conf=ersa25
Motivation, Goal, and Topics
Background: Many software-intensive systems in current and future application domains require (or will require) approval from a certification authority before being deployed. Approval hinges on the presentation of certification evidence about software design and performance. Examples of application domains include aircraft, medical devices, spacecraft, autonomous ground vehicles, and autonomous air vehicles. Examples of current certification authorities include the Federal Aviation Administration, the European Union Aviation Safety Agency, the Food and Drug Administration, and the Office for Nuclear Regulation.
Current pain: Today, each established application domain has a set of guidance documents. These tend to be process-oriented; i.e., they (i) prescribe specific techniques for analyzing, testing, and developing software, (ii) prescribe how the applicant should present data to the certification authority, (iii) state high-level objectives, and (iv) state pitfalls and approaches that should be avoided. This mindset has been successful in many domains. For example, among US air carriers, the safety record today is much better than it was decades ago. Unfortunately, the approach also has limitations, including: (i) slow adoption of new techniques, (ii) lack of applicability to future technology (e.g., AI), (iii) difficulty scaling to complex software and hardware (e.g., multi-core), (iv) cost and effort of maintaining certification evidence that are prohibitive for frequent and late changes, and (v) emphasis on applying processes (a tick-box exercise) rather than on direct evidence of the safety of the software. As a result, developers are unable to take full advantage of research from the real-time systems community, including recent knowledge, techniques, advances, and references to papers. One key barrier is presenting the strengths, weaknesses, and applicability of recent research to certification authorities and other stakeholders; this includes explaining the methods and their results to non-specialists. It is therefore worth exploring alternatives, specifically (i) whether explainability can help, (ii) what explainability means, and (iii) how it can be achieved for real-time systems.
Goal: The goal is to understand the role, meaning, and value of explanation in critical systems—in particular real-time systems.
Past Editions: https://sites.google.com/view/ersa22, https://sites.google.com/view/ersa23, https://sites.google.com/view/ersa24
Non-Exhaustive List of Topics:
Things needing explanation (e.g., the output of a schedulability or WCET analysis, or a post-mortem/close-call trace)
Ways of explaining: examples (e.g., a Gantt chart of a schedule); proofs (e.g., proof trees, unsat cores); models (e.g., a satisfying assignment)
Representations of explanations: graphical (pictures); textual; anthropomorphizing (e.g., min-max or forall-exists quantification viewed as a game); analogies
Building explanations from the output of existing tools (e.g., from a proof that is a sequence of lemmas, generate a picture for each lemma)
Performance metrics for explanations (e.g., the human skill and/or effort needed to understand an explanation)
Points in a software development life cycle at which, and persons to whom, explainability is most valuable
Changes in certification guidance documents needed for the value of explainability to accrue
Tool qualification of explainability methods; explainability as a substitute for tool qualification
Using ideas from theoretical computer science
Computational complexity issues regarding explainability (e.g., size of a certificate; size of an explanation of a property versus an explanation of its negation; a problem versus its complementary problem, co-X vs. X)
Arthur-Merlin protocols, zero-knowledge proofs, probabilistically checkable proofs, and interactive proof systems to (i) allow the applicant to preserve the privacy of some information, (ii) allow the certification authority to sample some evidence, and (iii) allow the certification authority to detect whether an applicant is cheating. Of particular interest is the case where the applicant claims not only to have performed a computation but also to have taken measurements of the "real world," and the certification authority is interested in checking whether the applicant has really taken these measurements
Program checking, to allow checking the output of a complex calculation (or proof); see the certificate-checking sketch after this list
Application of the aforementioned ideas in less formal settings (e.g., assurance cases)
Use of explanation in a challenge-response protocol between applicant and certification authority
Use of explanation as interface between human designers in different teams
Inspecting only part of an explanation while maintaining some confidence in the claim that it explains
Verification procedures that create a perturbed question from the original question and answer both, where the perturbed question differs from the original as little as possible while their answers differ as much as possible (contrastive explanation; see the sketch after this list)
Design principles that facilitate explainability (are some schedulers, e.g., time-triggered ones, easier to explain? are more "deterministic" systems easier to explain?)
Explainability in other domains (e.g., AI) and its use for the explainability of real-time systems
Re-interpretation of previously known results (e.g., schedulability tests) in view of explainability
Explainability of tools that rely on measurements (e.g., measurement-based WCET analysis)
Explainability of real-time-systems results in the context of safety engineering
Assurance cases as a form of explainability; explanation of assurance cases
Explanations of how design elements satisfy requirements (tracing)
Explanation of why a property of the physical world (e.g., the state of a plant controlled by a controller) is satisfied/violated because of an event in the cyber realm (e.g., a deadline miss of control software)
Explanations that encourage human engagement and curiosity (i.e., do not dump a large volume of explanations/arguments that kills human curiosity)
If a problem comes in two variants, an optimization problem and a decision problem, can the optimization variant provide an explanation for the decision variant?
Root cause analysis
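
To make the program-checking topic above concrete, here is a minimal sketch in Python, assuming fixed-priority tasks and standard response-time analysis; the function name, task set, and claimed bounds are hypothetical illustrations, not a reference implementation. A complex analysis tool emits a per-task response-time bound as a certificate, and a much simpler checker verifies that each bound is a prefix point of the response-time recurrence; any such prefix point soundly upper-bounds the true worst-case response time, so the verdict can be checked without re-running, or trusting, the analysis.

    import math

    # Each task is (C, T, D): worst-case execution time, period, relative
    # deadline. Tasks are assumed sorted by decreasing fixed priority.

    def check_certificate(tasks, claimed_R):
        for i, (C, T, D) in enumerate(tasks):
            R = claimed_R[i]
            # Interference from all higher-priority tasks within a window
            # of length R, per the standard response-time recurrence.
            interference = sum(math.ceil(R / Tj) * Cj
                               for (Cj, Tj, _) in tasks[:i])
            # Accept only if R is a prefix point of the recurrence and
            # does not exceed the deadline.
            if not (C + interference <= R <= D):
                return False, f"certificate rejected at task {i}"
        return True, "all response-time bounds verified"

    # Example: a tool claims these bounds for three tasks.
    tasks = [(1, 4, 4), (2, 6, 6), (3, 12, 12)]
    print(check_certificate(tasks, claimed_R=[1, 3, 12]))

Note that the checker never computes a fixed point itself; it evaluates one inequality per task, which is exactly the kind of simple, auditable artifact a certification authority could inspect.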
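
Similarly, for the contrastive-explanation topic, a minimal sketch under the simplest possible schedulability test (the EDF utilization bound for implicit-deadline tasks on one processor); the names, the single-task perturbation, and the step search are our assumptions. It finds the smallest inflation of one task's execution time that flips the verdict; the pair (original, perturbed) then explains how close the system is to the schedulability boundary.

    def edf_schedulable(tasks):
        # EDF, implicit deadlines, one processor: schedulable iff the
        # total utilization sum(C/T) is at most 1 (Liu and Layland).
        return sum(c / t for (c, t) in tasks) <= 1.0

    def contrastive_perturbation(tasks, task_index, step=0.01, limit=1000):
        # Inflate one task's execution time in small steps until the
        # schedulability verdict flips; return the smallest such
        # inflation and the perturbed task set.
        original = edf_schedulable(tasks)
        for k in range(1, limit + 1):
            delta = k * step
            perturbed = [(c + delta if i == task_index else c, t)
                         for i, (c, t) in enumerate(tasks)]
            if edf_schedulable(perturbed) != original:
                return delta, perturbed
        return None, None  # no flip found within the search limit

    tasks = [(1.0, 4.0), (2.0, 6.0), (3.0, 12.0)]   # utilization ~0.83
    print(edf_schedulable(tasks))                   # True
    print(contrastive_perturbation(tasks, task_index=0))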