Program
VNNCS will feature presentations from a mix of authors of submitted papers and invited speakers from both industry and academia. There will also be several interactive sessions, including poster presentations and a panel discussion, to encourage open discussion among participants.
The planned schedule for VNNCS is shown below.
Location: Plaza Court 1, Sheraton Denver Downtown Hotel, Denver, CO
8:30 - 9:00 Welcome and opening remarks
9:00 - 9:30 Organizer talk: Prof. Michael Everett
9:30 - 10:00 Coffee break
10:00 - 12:00 Submitted paper presentations
10:00 - 10:15 Formal Synthesis of Lyapunov Stability Certificates for Linear Switched Systems using ReLU Neural Networks
10:15 - 10:30 Robust Tracking Control with Neural Network Dynamic Models under Input Perturbations
10:30 - 10:45 Certified Robust Invariant Polytope Training in Neural Controlled ODEs
10:45 - 11:00 Recurrent Neural Network Controller with Classification Network under Signal Temporal Logic Specifications
11:00 - 11:15 Certified Continuous Time Zubov-Guided Neural Lyapunov Control
11:15 - 11:30 Verifiable Safety Q-Filters via Hamilton-Jacobi Reachability and Multiplicative Q-Networks
11:30 - 11:45 Verifying Nonlinear Neural Feedback Systems using Polyhedral Enclosures
12:00 - 1:30 Lunch break
1:30 - 3:00 Invited talks
1:30 - 2:00 Safety Assurance in Neural Network Controlled Systems: A Mixed Monotone Approach - Prof. Saber Jafarpour
2:00 - 2:30 Predictive Runtime Verification of Learning-Enabled Systems with Conformal Prediction - Prof. Lars Lindemann
2:30 - 3:00 Toward Verifiable Human Interaction - Dr. Jonathan DeCastro
3:00 - 3:30 Coffee break
3:30 - 4:00 Invited talk: Formal verification for neural network controlled systems with the α,β-CROWN framework - Prof. Huan Zhang
4:00 - 5:00 Panel Discussion
Invited Speakers
Dr. Jonathan DeCastro
Toyota Research Institute (TRI)
Title: Toward Verifiable Human Interaction
Abstract: With the rise of learned systems enabling embodied and human-centric domains such as robotics and driving, we can no longer rely on conventional techniques to verify the correctness of such systems. While it is commonly agreed that safety guarantees must be upheld to prevent harm, the precise ways in which robots can now interact with humans are both varied and extremely nuanced, making verification hard to frame formally and to solve for the vast majority of tasks achievable today. We discuss several recent planning and control approaches proposed toward bridging this gap. We begin by examining the role that formal specifications can play as a basis for learning how to undertake a task, and then show how the specification can be used to plan for and account for various human-aware risk factors at execution time. We then examine the role of learned inference and assistance schemes in guiding a human away from unsafe outcomes, and how our approach can encourage human-robot fluency and combat undesirable “automation surprise” brought about by conventional safety shielding techniques. Throughout the talk, we highlight our work in the context of various full- and semi-autonomous driving case studies.
Professor Huan Zhang
University of Illinois Urbana-Champaign (UIUC)
Title: Formal verification for neural network controlled systems with the α,β-CROWN framework
Abstract: As neural networks (NNs) increasingly govern safety-critical autonomous systems, ensuring formal guarantees like stability and safety remains a fundamental challenge. In this talk, I will present α,β-CROWN - an award-winning, GPU-accelerated verification framework designed to address scalability and flexibility challenges in neural network verification problems, such as the verification of NN-controlled systems. The framework is based on the principle of “linear bound propagation”, which exploits the problem structure by efficiently propagating linear inequalities on neural networks and dynamical systems. These linear inequalities can be strengthened with branch-and-bound to automatically prove various properties of a neural network controlled system with little manual derivation. I will showcase recent applications of the α,β-CROWN toolbox in control, including formal stability and contraction analysis of NN-controlled systems in both discrete and continuous-time settings.
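For readers unfamiliar with the “linear bound propagation” principle mentioned above, the following minimal sketch illustrates the idea on a toy example. It is an editorial illustration only, not material from the talk and not the α,β-CROWN implementation: it computes CROWN-style sound linear lower and upper bounds for a randomly generated one-hidden-layer ReLU network over an input box, using plain NumPy. In practice, the α,β-CROWN / auto_LiRPA toolchain automates this for general architectures, optimizes the relaxation slopes (α), and tightens the bounds with branch-and-bound (β) on GPUs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network f(x) = w2 @ relu(W1 @ x + b1) + b2
# (random weights, purely illustrative), with scalar output and input box [xl, xu].
n_in, n_hid = 2, 8
W1, b1 = rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)
w2, b2 = rng.normal(size=n_hid), rng.normal()
xl, xu = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

# 1) Pre-activation bounds for z = W1 @ x + b1 via interval arithmetic.
W1p, W1n = np.clip(W1, 0, None), np.clip(W1, None, 0)
zl = W1p @ xl + W1n @ xu + b1
zu = W1p @ xu + W1n @ xl + b1

# 2) Per-neuron linear relaxation of ReLU on [zl, zu]:
#    upper line su*z + iu, lower line sl*z (intercept 0).
unstable = (zl < 0) & (zu > 0)
su = np.where(zu <= 0, 0.0, np.where(zl >= 0, 1.0, zu / np.where(unstable, zu - zl, 1.0)))
iu = np.where(unstable, -su * zl, 0.0)
sl = np.where(zu <= 0, 0.0, np.where(zl >= 0, 1.0, (zu > -zl).astype(float)))  # CROWN heuristic slope

# 3) Backward propagation: per neuron, pick the upper or lower relaxation
#    depending on the sign of its output weight, then substitute z = W1 @ x + b1,
#    giving linear bounds a_l @ x + c_l <= f(x) <= a_u @ x + c_u.
lam_u = np.where(w2 >= 0, su, sl)
a_u = (w2 * lam_u) @ W1
c_u = (w2 * lam_u) @ b1 + w2 @ np.where(w2 >= 0, iu, 0.0) + b2
lam_l = np.where(w2 >= 0, sl, su)
a_l = (w2 * lam_l) @ W1
c_l = (w2 * lam_l) @ b1 + w2 @ np.where(w2 >= 0, 0.0, iu) + b2

# 4) Concretize the linear bounds over the input box.
ub = np.clip(a_u, 0, None) @ xu + np.clip(a_u, None, 0) @ xl + c_u
lb = np.clip(a_l, 0, None) @ xl + np.clip(a_l, None, 0) @ xu + c_l

# Sanity check: random samples never exceed the certified bounds.
xs = rng.uniform(xl, xu, size=(100_000, n_in))
fs = np.maximum(xs @ W1.T + b1, 0.0) @ w2 + b2
print(f"certified bounds: [{lb:.3f}, {ub:.3f}], sampled range: [{fs.min():.3f}, {fs.max():.3f}]")
```

Bounding a single network output over a box is the basic building block; composing such bounds with the system dynamics over multiple time steps is what enables the closed-loop analyses discussed in the talk.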
Professor Saber Jafarpour
University of Colorado Boulder
Title: Safety Assurance in Neural Network Controlled Systems: A Mixed Monotone Approach
Abstract: In recent years, neural networks have been increasingly deployed for decision-making in autonomous systems. While they offer substantial computational advantages, ensuring the safety and reliability of these learning-enabled components remains a significant challenge due to their high dimensionality and inherent nonlinearity. In this talk, I will present theoretical and algorithmic tools for certifying the safety of closed-loop systems with neural network controllers. I study safety from a reachability perspective and establish conditions for system safety based on the system’s reachable sets. To enable scalable analysis, I leverage concepts from monotone systems theory to develop computationally efficient over-approximations of reachable sets in nonlinear dynamical systems. By integrating this reachability framework with state-of-the-art neural network verification algorithms, I introduce an efficient reachability approach that provides rigorous safety guarantees for neural network-controlled systems. A key feature of this approach is its ability to capture the stabilizing effect of the neural network controller in the overall safety analysis.
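As a rough illustration of the embedding idea behind mixed monotone reachability (an editorial sketch, not material from the talk), the code below propagates box over-approximations of reachable sets for a discrete-time linear system whose control input is only known to lie in an interval, for example one obtained from a neural network verifier. Splitting the system matrices into positive and negative parts is the mixed-monotone decomposition specialized to linear maps; the framework described in the abstract extends this to nonlinear dynamics and couples it with NN verification. The double-integrator numbers and the constant input interval are hypothetical stand-ins.

```python
import numpy as np

def step_box(A, B, xl, xu, ul, uu):
    """One-step box over-approximation of x+ = A x + B u for x in [xl, xu], u in [ul, uu].

    The positive/negative split of A and B is the mixed-monotone decomposition of a
    linear map; the resulting embedding system maps boxes to boxes.
    """
    Ap, An = np.clip(A, 0, None), np.clip(A, None, 0)
    Bp, Bn = np.clip(B, 0, None), np.clip(B, None, 0)
    xl_next = Ap @ xl + An @ xu + Bp @ ul + Bn @ uu
    xu_next = Ap @ xu + An @ xl + Bp @ uu + Bn @ ul
    return xl_next, xu_next

# Double-integrator example (hypothetical numbers).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
xl, xu = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

for k in range(10):
    # In a closed-loop analysis, [ul, uu] would be verified bounds on the NN
    # controller over the current state box (e.g., from bound propagation);
    # here we simply assume u in [-1, 1] as a placeholder.
    xl, xu = step_box(A, B, xl, xu, np.array([-1.0]), np.array([1.0]))

print("reachable-set box after 10 steps:", xl, xu)
```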
Professor Lars Lindemann
University of Southern California (USC)
Title: Predictive Runtime Verification of Learning-Enabled Systems with Conformal Prediction
Abstract: Accelerated by rapid advances in machine learning and AI, there has been tremendous success in the design of learning-enabled autonomous systems in areas such as autonomous driving, intelligent transportation, and robotics. However, these exciting developments are accompanied by new fundamental challenges regarding the safety and reliability of these increasingly complex control systems, in which sophisticated algorithms interact with unknown environments. In this talk, I will provide new insights and discuss exciting opportunities to address these challenges.
Imperfect learning algorithms, system unknowns, and uncertain environments require design techniques that rigorously account for uncertainties. I advocate for the use of conformal prediction (CP), a statistical tool for uncertainty quantification, due to its simplicity, generality, and efficiency, as opposed to existing optimization-based neural network verification techniques that are either conservative or not scalable. I first provide an introduction to CP for the non-expert who is interested in applying CP to real-world engineering problems. My goal is then to show how we can use CP to predict failures of learning-enabled systems during their operation. In particular, we leverage CP to design two predictive runtime verification algorithms (an accurate and an interpretable version) that compute the probability that a high-level system specification is violated. Finally, we will discuss how robust versions of CP can be used to deal with distribution shifts that arise when the deployed learning-enabled system differs from the system at design time.
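For readers unfamiliar with conformal prediction, the minimal sketch below (illustrative only, with synthetic data; not code from the talk) shows the core split-conformal computation: collect nonconformity scores on a calibration set, take the finite-sample-corrected quantile, and obtain a threshold that a fresh score exceeds with probability at most delta, assuming exchangeability. In a predictive runtime verification setting, the score could, for instance, measure how far a predicted trajectory's specification robustness deviates from the realized one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration set: nonconformity scores from held-out runs of the system.
# (Synthetic stand-in; in practice, e.g., the gap between predicted and
# realized robustness of a high-level specification.)
n, delta = 200, 0.1
cal_scores = np.abs(rng.normal(size=n))

# Split conformal quantile with the finite-sample correction:
# the ceil((n + 1) * (1 - delta))-th smallest calibration score.
k = int(np.ceil((n + 1) * (1 - delta)))
q = np.inf if k > n else np.sort(cal_scores)[k - 1]

# Under exchangeability, P(new score <= q) >= 1 - delta.
test_scores = np.abs(rng.normal(size=100_000))
print(f"threshold q = {q:.3f}, empirical coverage = {(test_scores <= q).mean():.3f} (target {1 - delta})")

# Runtime-verification use (illustrative): flag a potential violation whenever the
# predicted specification robustness, discounted by q, drops below zero.
```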