Explainability of Real-Time Systems and Their Analysis (ERSA)
4th International Workshop on Explainability of Real-Time Systems and Their Analysis at the IEEE Real-Time Systems Symposium (RTSS 2025) in Boston, MA, USA
Today, almost all verification techniques provide answers to questions but do not provide explanations. We will address that.
Important dates
• Submission deadline:
September 26, 2025 (extended from September 12, 2025)
• Notification of acceptance:
October 10, 2025
• Final version due:
October 24, 2025
• Workshop:
December 2, 2025
Program Chairs
Bjorn Andersson,
SEI/CMU, USA
Isaac Amundson,
Collins Aerospace, USA
Program Committee
Ademola Adejokun,
Lockheed Martin, USA
Al Mok,
UTexas, USA
Chi-Sheng (Daniel) Shih,
NTU, Taiwan
C. Michael Holloway,
NASA Langley, USA
Dionisio de Niz,
SEI/CMU, USA
Ganesh Pai,
KBR/NASA, USA
John Lehoczky,
SEI/CMU, USA
Kit Siu,
GE Aerospace, USA
Mark Klein,
SEI/CMU, USA
Mats Heimdahl,
UMinnesota, USA
Mauricio Castillo-Effen,
Lockheed Martin, USA
Philippa Ryan,
UYork, UK
Shige Wang,
Motional, USA
Stefan Hallerstede,
AarhusU, DK
Zamira Daw,
UStuttgart, DE
Program
Location: Boston University’s Center for Computing and Data Sciences (CCDS). Details from RTSS about the venue are here. For ERSA, the specific rooms are as follows: CCDS 548 (before noon); 1101 (after noon).
A link to the workshop proceedings is here.
09:00am Welcome
Isaac Amundson and Bjorn Andersson
09:05am Keynote (Industrial)
Session chair: Isaac Amundson
Akshay Rajhans, "From GUIs to Generative AI: Explainability in the tooling for engineered system design"
MathWorks
10:20am Short announcements
Bjorn Andersson, "Retrospective of earlier ERSA"
Bjorn Andersson, "Certification authorities are starting to state explicit deadlines in guidance"
10:30am Coffee break
11:00am Peer-reviewed papers
Session chair: Isaac Amundson
Bryce Bjorkman, Bryan C. Ward, Saqib Hasan, "Towards Automated Design-Time Explainability of Assurance Cases and Trade Spaces"
Parth Ganeriwala, Candice Chambers, Siddhartha Bhattacharyya, Junaid Babar, "Explainable Assurance through Compositional Verification with Cognitive Models"
11:59am Keynote address (Academic)
Session chair: Bjorn Andersson
Stavros Tripakis, "Formalizing Specifications with Foundational Models and Consistency Checkers"
12:30pm Lunch
02:00pm Panel
Moderator: Bjorn Andersson
Akshay Rajhans, MathWorks
Stavros Tripakis, Northeastern University
Isaac Amundson, Collins Aerospace
03:00pm Adjourn
About the Keynote (Industrial)
Speaker: Akshay Rajhans
Affiliation: MathWorks
Title: From GUIs to Generative AI: Explainability in the tooling for engineered system design
Abstract: Engineered systems such as automotive, aerospace, and medical devices are built using model-based design and analysis. Tooling for model-based design, such as the MATLAB and Simulink product family, plays a key role in systems design, analysis, and operation. Explainable feedback to the users in such tools goes a long way in enabling users to have high confidence in their designs throughout this process.
After presenting some preliminaries about MATLAB and Simulink, the talk will take a specific example relevant to the formal methods and real-time systems community: the temporal assessments feature in Simulink Test. This feature allows users to capture temporal specifications that are formal by construction yet readable like English sentences. GUI mechanisms are designed to let users construct such specifications without knowledge of temporal logic. Clauses in tree structures entered through GUIs can be iteratively folded to make the specification read increasingly like English. At edit time, textual explanations and graphical cartoons give the user feedback by visualizing fictitious pass/fail examples. At test time, actual passing and failing test cases are accompanied not just by a pass/fail result but by relevant textual and graphical information explaining why something fails. We will also discuss some expressivity-versus-usability tradeoffs encountered in releasing this functionality with engineers in mind.
In the remaining time the talk will briefly cover some advanced topics around explainability in the age of AI. On the one hand, we will survey some interpretability techniques for AI algorithms and components, and on the other hand we will discuss the potential of Generative AI for explaining software code and error messages. We will conclude the talk with an outlook on challenges and opportunities.
Bio: Akshay Rajhans is Chief Research Scientist and Head of the MathWorks Advanced Research & Technology Office. He is responsible for several research and technology innovation programs at MathWorks. His technical background centers on model-based design and analysis of engineered systems, such as AI-enabled intelligent cyber-physical systems, and he represents MathWorks in the research community in various capacities. He was a keynote speaker at the 25th International Conference on Model Driven Engineering Languages and Systems (MODELS) in 2022, the 19th International Runtime Verification Conference at the World Congress on Formal Methods in 2019, and the First International Workshop on Multi-Paradigm Modeling for Cyber-Physical Systems (MPM4CPS), co-located with the MODELS Conference in 2019.
Dr. Rajhans has a Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University and an M.S. in Electrical Engineering from the University of Pennsylvania. Earlier in his career, Dr. Rajhans worked on development and application engineering of electronic control systems for diesel-engine applications at Cummins. As a research intern at Bosch, he co-invented a model-based approach to non-intrusive load monitoring. He is a Senior Member of IEEE and ACM.
About the Keynote (Academic)
Speaker: Stavros Tripakis
Affiliation: Northeastern University
Title: Formalizing Specifications with Foundational Models and Consistency Checkers
Abstract: I will present joint work with Medina Andresel, Cristinel Mateis and Dejan Ničković (AIT Austrian Institute of Technology) and Panagiotis Katsaros and Spyridon Kounoupidis (Aristotle University of Thessaloniki) on using combined LLMs and symbolic tools to translate informal requirements in English into formal specifications in LTL (linear temporal logic), to check consistency of the resulting formal specs, and to provide feedback to the user. This is ongoing work recently presented at AISoLA 2025. Time permitting, I will also discuss some personal views on the current state of AI technology and threats.
Bio: Stavros Tripakis is an associate professor in the Khoury College of Computer Sciences at Northeastern University, based in Boston.
Tripakis is interested in the foundations of software and system design. His research focuses on formal methods; computer-aided verification and synthesis; safety-critical, embedded, and cyber-physical systems; and security. He is a faculty advisor for the Formal Methods Group, which develops theories and tools to design better systems.
Tripakis has held positions at the University of California, Berkeley; the French National Research Center; Cadence Design Systems; and Aalto University. He co-chaired the 10th ACM & IEEE Conference on Embedded Software and served on the board of ACM SIGBED — first as secretary and treasurer, then as vice-chair. His h-index is 50.
Paper submission
Workshop website: https://sites.google.com/view/ersa25
Format: Extended abstracts or position papers that help define the area, up to 4 pages, using the IEEE conference proceedings manuscript template
Online submission: https://easychair.org/conferences/?conf=ersa25
Motivation, Goal, and Topics
Background: Many software-intensive systems of current and future application domains require (or will require) approval from a certification authority before being deployed. Approval will hinge on presentation of certification evidence about software design and performance. Examples of application domains include: aircraft, medical devices, spacecraft, autonomous ground vehicles, autonomous air vehicles. Examples of current certification authorities include: Federal Aviation Administration, European Union Aviation Safety Agency, Food and Drug Administration, Office for Nuclear Regulation.
Current pain: Today, each established application domain has a set of guidance documents. These tend to be process-oriented; i.e., they (i) prescribe specific techniques for analyzing, testing, and developing software, (ii) prescribe how the applicant should present data to the certification authority, (iii) state high-level objectives, and (iv) state pitfalls and approaches that should be avoided. This mindset has been successful in many domains. For example, among US air carriers, the safety record today is much better than it was decades ago. Unfortunately, the approach also has limitations, including: (i) slow adoption of new techniques, (ii) lack of applicability to future technology (e.g., AI), (iii) difficulties scaling to complex software and hardware (e.g., multi-core), (iv) prohibitive cost and effort to maintain certification evidence under frequent and late changes, and (v) a tick-box emphasis on applying processes rather than on direct evidence of the safety of the software. Thus, developers are unable to take full advantage of the research within the real-time systems community, including recent knowledge, techniques, advances, and references to papers. One key barrier is presenting the strengths, weaknesses, and applicability of recent research to certification authorities and other stakeholders; this includes explaining the methods and their results to non-specialists. It is therefore worth exploring alternatives, specifically (i) whether explainability can help, (ii) what explainability means, and (iii) how it can be achieved for real-time systems.
Goal: The goal is to understand the role, meaning, and value of explanation in critical systems—in particular real-time systems.
Past Editions: https://sites.google.com/view/ersa22, https://sites.google.com/view/ersa23, https://sites.google.com/view/ersa24
Non-Exhaustive List of Topics:
Things needing explanation (e.g., output of schedulability or WCET analysis, post-mortem/close-call trace)
Ways of explaining: Examples (e.g., Gantt chart of a schedule); Proofs (e.g., trees, unsat core); Models (as in a satisfying assignment)
Representations of explanations: Graphical (pictures); Textual; Anthropomorphizing (min-max or forall exist can be viewed as a game); Analogies
Building explanation from output from existing tool (e.g., from a proof that is a sequence of lemmas, for each lemma generate a picture)
Performance metrics of explanation (e.g., human skill and/or effort needed to understand the explanation)
Points in a software development life cycle, and persons, for which explainability is most valuable
Changes in certification guidance documents needed for the value of explainability to accrue
Tool qualification of explainability methods; explainability as a substitute for tool qualification
Using ideas from theoretical computer science
Computational complexity issues regarding explainability (size of certificate, size of explanation related to explanation of a property versus explanation of the negation of a property, problem and its complementary problem, co-X vs X)
Arthur-Merlin protocols, zero-knowledge proofs, probabilistically checkable proofs, and interactive proof systems to (i) allow the applicant to preserve privacy of some information, (ii) allow the certification authority to sample some evidence, and (iii) allow the certification authority to detect whether an applicant is cheating. Of particular interest is the case where the applicant is not just claiming to have performed a computation but is also taking measurements of the real world, and the certification authority is interested in checking whether the applicant has really taken these measurements.
Program checking to allow checking the output of a complex calculation (or proof)
Application of the aforementioned ideas in less formal settings (e.g., assurance case)
Use of explanation in a challenge-response protocol between applicant and certification authority
Use of explanation as interface between human designers in different teams
Inspecting part of an explanation while maintaining some confidence in the claim that it explains
Verification procedures that create a perturbed question from the original question, answer both, and choose the perturbation so that the questions differ as little, and their answers as much, as possible (contrastive explanation)
Design principles that facilitate explainability (are some schedulers, e.g., time-triggered (TT), easier to explain? Are systems that are more “deterministic” easier to explain?)
Explainability in other domains (e.g., AI) and their use in explainability of real-time systems
Re-interpretation of previously known results (e.g., schedulability tests) in view of explainability
Explainability of tools that rely on measurements (e.g., measurement-based WCET analysis)
Explainability of real-time-systems results in safety engineering
Assurance cases as a form of explainability; explanation of assurance cases
Explanations of how design elements satisfy requirements (tracing)
Explanation of why a property of the physical world (e.g., the state of a plant controlled by a controller) is satisfied/violated because of an event in the cyber-realm (e.g., deadline miss of control software)
Explanations that encourage human engagement and curiosity (i.e., don't dump a large volume of explanations/arguments that kill human curiosity)
If a problem comes in two variants, an optimization problem and a decision problem, can the optimization-problem variant provide an explanation for the decision-problem variant?
Root cause analysis
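The program-checking topic above (checking the output of a complex calculation without re-running or trusting the tool that produced it) can be illustrated with a classic result-checking sketch in the style of Blum's program checkers. The example below is an illustration invented for this page, not a tool from the workshop: a checker accepts a claimed sorted output only if it is ordered and a permutation of the input, so a buggy solver is caught even though the checker never sorts anything itself.

```python
from collections import Counter

def check_sort(inp, out):
    """Result checker: accept `out` as a correct sort of `inp`
    without re-sorting. Cheap to run, independent of the solver."""
    # The output must be in non-decreasing order ...
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    # ... and a permutation of the input (same multiset of elements).
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

# A hypothetical untrusted "solver" whose answer we check, not trust:
# it drops the largest element and duplicates another.
def buggy_sort(xs):
    return sorted(xs)[:-1] + xs[-1:]

data = [3, 1, 2, 5, 4]
assert check_sort(data, sorted(data))          # correct answer accepted
assert not check_sort(data, buggy_sort(data))  # wrong answer rejected
```

The asymmetry is the point: producing the answer costs O(n log n), while checking it costs O(n) plus a multiset comparison, and a failed check pinpoints which property (ordering or permutation) was violated — a simple form of explanation a certification authority could apply to evidence it did not compute itself.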