Speakers

Clark Barrett (Stanford University, USA)

Towards Formal Verification of Deep Neural Networks

Deep neural networks are revolutionizing the way complex systems are designed. Consequently, there is a pressing need for tools and techniques for network analysis and certification. To help address that need, we present Marabou, a framework for verifying deep neural networks. Marabou is an SMT-based tool that can answer queries about a network’s properties by transforming these queries into constraint satisfaction problems. It can accommodate networks with different activation functions and topologies, and it performs high-level reasoning on the network that can curtail the search space and improve performance. It also supports parallel execution to further enhance scalability.
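
As a rough illustration of how such queries look, the sketch below encodes a property check on a tiny, hypothetical ReLU network as a constraint satisfaction problem. It uses the Z3 SMT solver purely as a stand-in; Marabou has its own specialized solver and API, and the weights, bounds, and property here are invented for illustration.

```python
# A minimal sketch (not Marabou's actual API): a robustness-style query on a
# tiny ReLU network posed as a constraint satisfaction problem, solved here
# with Z3 for illustration. The structure -- input bounds, network equations,
# negated property -- mirrors the kind of query an SMT-based verifier answers.
from z3 import Real, If, Solver, sat

x1, x2 = Real("x1"), Real("x2")          # network inputs

def relu(t):
    return If(t > 0, t, 0)               # piecewise-linear activation

# Hypothetical fixed weights of a 2-2-1 network.
h1 = relu(1.0 * x1 - 2.0 * x2 + 0.5)
h2 = relu(-1.5 * x1 + 1.0 * x2)
y = 2.0 * h1 - 1.0 * h2 - 0.3

s = Solver()
s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)  # input region of interest
s.add(y > 1.0)                              # negation of the property 'y <= 1'

if s.check() == sat:
    print("Property violated, counterexample:", s.model())
else:
    print("Property holds on the given input region")
```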

Kamalika Chaudhuri (University of California, San Diego, USA)

A Closer Look at Adversarial Robustness for Well-Separated Data

Adversarial examples are small, imperceptible perturbations to legitimate test inputs that cause machine learning classifiers to misclassify. While recent work has proposed many attacks and defenses, exactly why they arise remains a mystery. In this talk, we'll take a closer look at this question in the context of two types of classifiers. First, we'll look at non-parametric methods and see that some are inherently robust while others are not. If time permits, we'll then take a closer look at neural networks and show that the robustness-accuracy tradeoff seen on standard image datasets is not inherent, but entirely a consequence of suboptimal training algorithms.
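
For intuition about inherent robustness in non-parametric methods, here is a minimal sketch (not the speaker's analysis) of a certified radius for a 1-nearest-neighbor classifier: by the triangle inequality, no l2 perturbation smaller than half the gap between the nearest same-class and nearest other-class training points can change the prediction.

```python
# A minimal, illustrative certificate for 1-nearest-neighbor classification.
# If d_same is the distance from x to the nearest training point of the
# predicted class and d_other the distance to the nearest training point of
# any other class, the prediction cannot change under any perturbation of
# l2 norm smaller than (d_other - d_same) / 2.
import numpy as np

def certified_radius_1nn(x, X_train, y_train):
    dists = np.linalg.norm(X_train - x, axis=1)
    pred = y_train[np.argmin(dists)]                  # 1-NN prediction
    d_same = dists[y_train == pred].min()
    d_other = dists[y_train != pred].min()
    return pred, max(0.0, (d_other - d_same) / 2.0)   # certified l2 radius

# Toy usage with made-up data.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array([0, 0, 1])
print(certified_radius_1nn(np.array([0.2, 0.1]), X, y))
```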

Krishnamurthy Dvijotham (DeepMind, UK)

Generalized black-box verification via randomized smoothing

Techniques for verifying neural networks have improved rapidly in recent years. However, scaling these techniques to different neural network architectures and specifications remains challenging, with each new architecture and specification often requiring the development of novel verification techniques. At the same time, deep learning research is progressing at an even faster pace, with new architectures routinely outperforming baselines on several machine learning tasks. This calls for the development of verification techniques that are agnostic to the details of the network being verified. In recent work, randomized smoothing was proposed as a technique for making models robust to input perturbations bounded in the l2 norm, by smoothing the outputs of an arbitrary black-box baseline classifier over several random Gaussian perturbations of the input. In this work, we generalize randomized smoothing to accommodate arbitrary smoothing distributions, enabling certification of robustness to several different classes of input perturbations and improved robustness certificates for smoothed probabilistic classifiers. I will present theoretical properties of this smoothing framework and describe applications to robustness verification of image and audio classification models.
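
For context, the sketch below shows the standard Gaussian (l2) special case that the talk generalizes: a Monte Carlo estimate of the smoothed classifier's top class together with the usual certified radius sigma * Phi^{-1}(p_A). The base classifier interface and the plug-in probability estimate are simplifying assumptions; a real certificate would use a confidence lower bound.

```python
# A minimal sketch of Gaussian randomized smoothing. The smoothed classifier
# predicts the class most likely under Gaussian perturbations of the input;
# a (lower bound on the) probability p_A of that class yields the standard
# certified l2 radius sigma * Phi^{-1}(p_A).
import numpy as np
from scipy.stats import norm

def smooth_predict_and_certify(base_classifier, x, sigma, n_samples=1000):
    # base_classifier: maps a batch of inputs to integer class labels (assumption).
    noise = np.random.randn(n_samples, *x.shape) * sigma
    labels = base_classifier(x[None, ...] + noise)
    counts = np.bincount(labels)
    top = counts.argmax()
    p_a = counts[top] / n_samples       # plug-in estimate; a real certificate
                                        # would use a lower confidence bound
    if p_a <= 0.5:
        return top, 0.0                 # abstain from certifying
    return top, sigma * norm.ppf(p_a)   # certified l2 radius
```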

Suman Jana (Columbia University, USA)

Training Verifiably Robust Malware Classifiers

Although state-of-the-art malware classifiers can easily achieve almost perfect test accuracy and extremely low false-positive rates, it has been shown that even a simple adversary can evade them. A practically useful malware classifier must be robust against evasion attacks. However, achieving such robustness is an extremely challenging task. In this talk, I am going to describe some recent work from our group on training robust PDF malware classifiers with verifiable robustness properties. For instance, a robustness property can enforce that no matter how many pages from benign documents are inserted into a malicious PDF, the classifier must still classify it as malicious. We demonstrate how the worst-case behavior of a malware classifier with respect to specific robustness properties can be formally verified. Furthermore, our results indicate that training classifiers that satisfy formally verified robustness properties can increase the evasion cost of unbounded (i.e., not bounded by the robustness properties) attackers by eliminating simple evasion attacks.
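
As a sketch of how such worst-case behavior can be bounded, the snippet below implements plain interval bound propagation through a feedforward ReLU network. The connection to the page-insertion property is an assumption for illustration: if the set of documents reachable under the property can be over-approximated by an interval over the classifier's input features, a sound lower bound on the malicious score over that interval verifies the property.

```python
# A minimal interval-bound-propagation sketch (illustrative; the actual
# abstraction and properties in the talk may differ). Given lower and upper
# bounds on the input features -- e.g., bounds covering every PDF reachable
# by inserting benign pages -- it computes sound bounds on the classifier's
# output, so one can check that the "malicious" score never drops below the
# decision threshold anywhere in the property region.
import numpy as np

def interval_forward(layers, lb, ub):
    """layers: list of (W, b) weight/bias pairs of a feedforward ReLU network."""
    for i, (W, b) in enumerate(layers):
        center, radius = (lb + ub) / 2.0, (ub - lb) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lb, ub = new_center - new_radius, new_center + new_radius
        if i < len(layers) - 1:            # ReLU on hidden layers only
            lb, ub = np.maximum(lb, 0), np.maximum(ub, 0)
    return lb, ub

# The property holds if the lower bound of the malicious score stays above the
# threshold for the entire input interval.
```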

Nils Jansen (Radboud University Nijmegen, The Netherlands)

Planning under Uncertainty – AI Safety via Formal Verification

The subject of this talk is planning problems where agents operate inside environments that are subject to uncertainties and partial observability. Such problems are naturally modelled by partially observable Markov decision processes (POMDPs). The goal is to compute a policy for an agent that is guaranteed to satisfy certain safety or performance specifications. We discuss two concrete problems. First, we show how to leverage machine learning, in particular recurrent neural networks, to efficiently synthesize provably correct strategies. Second, we discuss settings where an agent operates under information limitations due to imperfect knowledge about the accuracy of its sensors. In the underlying models, probabilities are not exactly known but are part of so-called uncertainty sets.
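
For readers less familiar with POMDPs, the sketch below shows the standard belief update that such policies are typically computed over; the transition and observation models T and O are the quantities that become set-valued in the uncertain setting discussed in the talk.

```python
# A minimal sketch of the standard POMDP belief update (Bayes filter). A policy
# with guarantees is typically a function of this belief over hidden states.
import numpy as np

def belief_update(belief, action, observation, T, O):
    """
    belief: shape (S,)  distribution over states
    T:      shape (A, S, S), T[a, s, s'] = P(s' | s, a)
    O:      shape (A, S, Z), O[a, s', z] = P(z | s', a)
    """
    predicted = belief @ T[action]                  # P(s') after taking the action
    updated = predicted * O[action][:, observation] # weight by observation likelihood
    return updated / updated.sum()                  # normalize (assumes P(z) > 0)
```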

Mykel Kochenderfer (Stanford University, USA)

Safety validation through combining adaptive stress testing with neural network verification tools

Neural networks have become the state of the art for computer vision problems because of their ability to efficiently model complex functions from large amounts of data. While neural networks perform well empirically for a variety of tasks, their performance is difficult to guarantee. Recent neural network verification tools can certify robustness with respect to a given input image; however, for neural network systems used in closed-loop controllers, robustness with respect to individual images does not address multi-step safety properties of the neural network controller and its environment. Neural network systems interacting in the physical world through natural images are operating in a black-box environment, making formal verification intractable. This work combines the adaptive stress testing (AST) framework with the neural network verification tool Marabou to search for the most likely sequence of image disturbances that causes the neural-network-controlled system to reach a failure. An autonomous aircraft taxi application is used to demonstrate the AST method, which finds sequences of image disturbances that cause the neural network to guide the aircraft off the taxiway. Further analysis of the AST results revealed explainable causes of the failure, giving insight into the scenarios most vulnerable to adversarial noise.
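
The sketch below illustrates the AST objective in its simplest form, with plain random search standing in for the tree search or reinforcement learning that AST normally uses, and without the Marabou coupling: find the most likely disturbance sequence (highest log-likelihood under the disturbance model) that drives the simulated closed-loop system to failure. The simulator and likelihood interfaces are assumptions.

```python
# A schematic sketch of adaptive stress testing (AST). AST is usually driven by
# Monte Carlo tree search or reinforcement learning; plain random search stands
# in here to show the objective: the most likely disturbance sequence that
# causes the closed-loop system to fail.
import numpy as np

def ast_search(simulate, log_likelihood, horizon, n_trials=1000, sigma=0.1):
    """
    simulate(seq):       roll out the closed-loop system under the disturbance
                         sequence, return True if it ends in failure (assumption)
    log_likelihood(seq): log-probability of the sequence under the disturbance model
    """
    best_seq, best_loglik = None, -np.inf
    for _ in range(n_trials):
        seq = np.random.randn(horizon) * sigma   # candidate disturbance sequence
        if simulate(seq):                        # did the rollout end in failure?
            ll = log_likelihood(seq)
            if ll > best_loglik:
                best_seq, best_loglik = seq, ll  # most likely failure found so far
    return best_seq, best_loglik
```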

Andreas Krause (ETH Zurich, Switzerland)

Towards Provably Safe Reinforcement Learning

While we have seen remarkable breakthroughs in reinforcement learning in recent years, significant challenges remain before these approaches can be safely deployed in high-stakes real-world domains. In my talk, I will present recent approaches to model-based RL that retain reliability guarantees such as stability. The techniques rely on concentration bounds for nonparametric Gaussian process models and integrate them with methods from robust optimization and control, as well as formal verification. Crucially, since the bounds are data-driven, safe exploration techniques enable improved performance over time.
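
The sketch below illustrates the basic ingredient rather than the speaker's algorithm: a Gaussian process posterior provides data-driven confidence bounds on an unknown safety-relevant quantity, and only candidates whose pessimistic (upper-confidence) estimate stays below a threshold are treated as safe. The kernel, the scaling factor beta, and the threshold are placeholder choices.

```python
# A minimal sketch of the kind of data-driven confidence bound used in safe
# model-based RL: a Gaussian process models an unknown cost or constraint, and
# a candidate is kept only if its pessimistic estimate is below the threshold.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def safe_candidates(X_observed, y_observed, X_candidates, threshold, beta=2.0):
    # X_* are numpy arrays of shape (n, d); y_observed has shape (n,).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-3)
    gp.fit(X_observed, y_observed)
    mean, std = gp.predict(X_candidates, return_std=True)
    ucb = mean + beta * std                # pessimistic estimate of the cost
    return X_candidates[ucb < threshold]   # keep only candidates deemed safe
```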

Changliu Liu (Carnegie Mellon University, USA)

Run-time verification of deep neural networks

Deep neural networks are widely used for nonlinear function approximation with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. In this talk, we will first provide an overview of existing methods for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. Then we will discuss potential approaches for applying these methods to sound run-time verification. Run-time verification is critical for ensuring the safety of real-time systems with neural network components, especially when the safety specifications, the network parameters, or the input distributions change over time. We will demonstrate an incremental approach for run-time verification on a feedforward neural network for human motion prediction.
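
A minimal sketch of the run-time setting (not the incremental algorithm itself) is given below: at each step the current input, inflated by a sensor-noise bound, is pushed through a sound bound-propagation routine and the resulting output bounds are checked against whatever safety specification is currently in force. All three interfaces are assumptions.

```python
# A schematic run-time verification loop (illustrative). The point is that the
# check is repeated online, so it must tolerate specifications, inputs, or
# network parameters that change over time.
def runtime_monitor(observe, propagate_bounds, is_safe_box, noise_bound, n_steps):
    """
    observe():                returns the current network input (e.g., a numpy array)
    propagate_bounds(lb, ub): sound lower/upper output bounds of the network on [lb, ub]
    is_safe_box(lb, ub):      the safety specification currently in force
    """
    for _ in range(n_steps):
        x = observe()
        out_lb, out_ub = propagate_bounds(x - noise_bound, x + noise_bound)
        if not is_safe_box(out_lb, out_ub):
            return False   # e.g., hand control to a verified fallback, raise an alarm
    return True
```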

Dorsa Sadigh (Stanford University, USA)

What Happens to Safety When Humans are not Optimal?

Safe AI and autonomy usually rely heavily on modeling assumptions. For example, the safety of an autonomous vehicle is highly dependent on what it assumes about other vehicles on the road and how accurate those models are. On the one hand, one can rely on an adversarial model of other vehicles, resulting in very safe but conservative strategies. On the other hand, one can provide strong safety guarantees under limiting assumptions about other agents. To address these challenges, there have been significant advances in building computational models of humans interacting with or operating autonomous and intelligent systems. Some of today’s robots model humans as optimal and rational decision-makers. Other robots account for human limitations and relax this assumption so that the human is modeled as noisily rational. Both of these models make sense when the human receives deterministic rewards. But in real-world scenarios, rewards are rarely deterministic. Instead, we must make choices subject to risk and uncertainty; in these settings, humans exhibit a cognitive bias towards suboptimal behavior.

In this talk, I will discuss how we can model suboptimal human behavior in risky scenarios using ideas from prospect theory and behavioral economics. We demonstrate that our risk-aware models capture human suboptimalities that cannot be captured by noisily rational human models, and discuss how these models can be used by robots to act safely and efficiently in both risky and non-risky settings.
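
For concreteness, the sketch below shows the standard prospect-theory ingredients such a risk-aware model can build on: an S-shaped value function with loss aversion and an inverse-S probability weighting function. The parameter values are the commonly cited Tversky-Kahneman estimates and are used here only for illustration.

```python
# A minimal sketch of prospect-theory ingredients for a risk-aware human model:
# gains are discounted, losses amplified (loss aversion), and small
# probabilities are overweighted while large ones are underweighted.
import numpy as np

def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    x = np.asarray(x, float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

def pt_weight(p, gamma=0.61):
    p = np.asarray(p, float)
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def prospect_utility(outcomes, probs):
    # Subjective value of a risky prospect; contrast with the expected value
    # a noisily rational model would use.
    return float(np.sum(pt_weight(probs) * pt_value(outcomes)))

print(prospect_utility([10.0, -10.0], [0.5, 0.5]))   # negative: loss aversion
```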

James Lopez (GE Research, USA)

Ensuring Safe Operation of AI for Unmanned Aircraft Systems

GE Research, in collaboration with the Stanford Center for AI Safety, is developing techniques to ensure the safe operation of unmanned aircraft systems (UAS), especially when AI-based algorithms are involved in safety-critical functions of a UAS operation. One such critical UAS function is autonomous detect and avoid (DAA). In 2018 GE Research was the first to integrate and flight-test a pre-release version of the ACAS sXu collision avoidance algorithm. That algorithm used a look-up table (LUT) version of ACAS sXu which required 3 GB of RAM. Stanford has trained a deep neural network (DNN) to implement the complex input-output relationship of the ACAS sXu algorithm. GE Research is working with the Stanford Center for AI Safety to formally prove safety properties of the ACAS sXu DNN implementation. Such formal proofs would constitute one form of evidence in a safety assurance case to enable a waiver and eventually certification to operate a UAS using the ACAS sXu DNN. GE Research is also collaborating with GE Aviation Systems on the design and test of a runtime safety assurance (RTSA) architecture to prevent a UAS from violating a safety policy at runtime. The RTSA architecture follows the guidance of the ASTM F3269-17 standard “Standard Practice for Methods to Safely Bound Flight Behavior of Unmanned Aircraft Systems Containing Complex Functions”. In GE’s RTSA architecture, a safety function monitors the in-flight operation of a “complex function”, such as an AI-based autopilot, and takes control of the UAS away from the complex function if a safety policy (such as a geofence boundary) is violated. GE Research is collaborating with the Stanford Center for AI Safety on novel techniques that use reinforcement learning (RL) to learn an optimal policy for when the RTSA should switch to a safe recovery mode. The techniques described in this talk are expected to advance the state of the art in the safe operation of UAS that use AI-based algorithms in safety-critical functions.
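
The switching logic of such an RTSA architecture can be sketched as below; the circular geofence, the state layout, and the margin are invented for illustration, and the learned (RL-based) switching policy mentioned above would replace the hand-coded trigger.

```python
# A schematic sketch of RTSA switching: a monitor supervises the "complex
# function" (e.g., an AI-based autopilot) and transfers control to a recovery
# controller when the safety policy -- here a simple circular geofence -- is
# about to be violated.
import numpy as np

def rtsa_step(state, complex_function, recovery_controller,
              geofence_center, geofence_radius, margin=0.1):
    # Assumes state[:2] holds the horizontal position of the UAS.
    distance = np.linalg.norm(state[:2] - geofence_center)
    if distance > geofence_radius * (1.0 - margin):  # policy about to be violated
        return recovery_controller(state), "recovery"
    return complex_function(state), "complex"
```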

Sriram Sankaranarayanan (University of Colorado at Boulder, USA)

Learning and Verifying Conformant Data-Driven Models of Dynamical Systems

Neural networks are increasingly used as data-driven models for a wide variety of physical systems such as ground vehicles, airplanes, human physiology and automobile engines. These models are in turn used for designing and verifying autonomous systems. The advantages of using neural networks include the ability to capture characteristics of particular systems using the available data. This is particularly advantageous for medical systems, wherein the data collected from individuals can be used to design devices that are well-adapted to a particular individual's unique physiological characteristics. At the same time, neural network models remain opaque: their structure makes them hard for human developers to understand and interpret. One key challenge lies in checking that neural network models of processes are “conformant” to the well-established scientific (physical, chemical and biological) laws that underlie these processes. In this talk, we will show how conformance often fails in models that are otherwise accurate and trained using the best practices in machine learning, with potentially serious consequences. We motivate the need for learning and verifying key conformance properties in data-driven models of the human insulin-glucose system and data-driven automobile models. We survey verification approaches for neural networks that can hold the key to learning and verifying conformance.

Joint work with Souradeep Dutta, Monal Narasimhamurthy and Taisa Kushner
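
As a concrete, if simplified, picture of what a conformance property looks like, the sketch below tests a hypothetical monotonicity law by sampling: with everything else fixed, adding insulin should not increase the model's predicted glucose. The specific law, the interfaces, and the sampling-based check are assumptions; the talk concerns establishing such properties formally rather than empirically.

```python
# A minimal sketch of a conformance check by sampling (illustrative only).
# Assumed law: with everything else fixed, increasing the insulin input to a
# learned insulin-glucose model should not increase the predicted glucose.
def check_monotone_conformance(model, sample_state, insulin_index,
                               n_samples=1000, delta=1.0):
    # model(x):       predicted glucose for state vector x (assumption)
    # sample_state(): returns a random physiological state vector (assumption)
    violations = 0
    for _ in range(n_samples):
        x = sample_state()
        x_more = x.copy()
        x_more[insulin_index] += delta      # strictly more insulin
        if model(x_more) > model(x):        # glucose prediction went up
            violations += 1
    return violations / n_samples           # fraction of sampled violations
```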

Iain Whiteside (Five AI, UK)

AI Safety for Automated Driving

We first elaborate on some of the practical challenges of demonstrating the safety of a perception system for Level 4 self-driving. We then outline a methodology and a set of technologies that we believe can help us properly argue for the safety of the AI components of an AV stack. We will exemplify the methodology with some real-world examples from Five's technology.