NNV

Grants

  • The DARPA Assured Autonomy program, 2018-2023

  • Toyota Research, 2021-present

  • The Nebraska EPSCoR First Award, 2021-2022

Motivation

Deep Neural Networks (DNNs) have become a popular choice for many complex applications such as image classification, natural language processing, and object recognition. Recently, DNNs have been used in safety-critical autonomous CPS such as autonomous driving vehicles and the new generation of aircraft collision avoidance systems (ACAS Xu). Despite the advantages of DNNs in learning and solving complicated problems, it has been shown that well-trained DNNs are vulnerable to adversarial attacks, i.e., slightly perturbing the input may lead to a vast difference in the output of a DNN. Since DNNs may behave incorrectly and unsafely, integrating DNN-based learning-enabled components into safety-critical autonomous CPS may pose significant financial and life-threatening risks. Therefore, there is an urgent need for advanced and scalable methods that can efficiently verify DNNs and DNN-based learning-enabled CPS.

Objectives & Publications

Tool

NNV (Neural Network Verification), written in MATLAB + Python. GitHub links: https://github.com/transafeailab/nnv, https://github.com/verivital/nnv

Main developer: Dung Tran

Approach

Our approach is based on reachability analysis: we compute the reachable sets of a neural network-based system given an initial input set. Since the reachable sets contain all possible states of the system, they can be used to verify whether the system satisfies a safety/robustness property.
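As a simplified illustration of this idea (interval arithmetic rather than NNV's star-set method, with a made-up two-layer network and a made-up property), the sketch below propagates a box-shaped input set through ReLU layers and checks a safety property on the output bounds:

    import numpy as np

    def interval_relu_layer(lb, ub, W, b):
        # Propagate the box [lb, ub] through y = ReLU(W x + b):
        # positive weights pull from the matching bound, negative from the opposite.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        pre_lb = W_pos @ lb + W_neg @ ub + b
        pre_ub = W_pos @ ub + W_neg @ lb + b
        return np.maximum(pre_lb, 0), np.maximum(pre_ub, 0)

    # Hypothetical two-layer ReLU network and input set x in [-0.1, 0.1]^2
    W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
    W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

    lb, ub = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
    lb, ub = interval_relu_layer(lb, ub, W1, b1)
    lb, ub = interval_relu_layer(lb, ub, W2, b2)

    # Made-up safety property: the output must stay below 1.0.
    # If the over-approximate bound satisfies it, the real system does too.
    print("output range:", lb, ub)
    print("safe" if ub[0] < 1.0 else "unknown (over-approximation too coarse)")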

Highlights

  • NNV supports both exact and over-approximate reachability analysis for the verification of neural networks using star sets (see the star-set sketch after this list)

  • NNV is the first approach that can compute and visualize the exact reachable set of ACAS Xu networks

  • NNV performs breadth-first-search (BFS) verification; a more efficient depth-first-search (DFS) verification algorithm can be found in NNENUM

  • NNV solves both the "open-loop" problem (neural network verification) and the "closed-loop" problem (neural network control system verification)

  • NNV's performance can be improved on multicore machines via parallel computing

  • NNV also implements the zonotope (DeepZ) and the new abstract-domain (DeepPoly) methods for easy comparison

  • We will continue to develop and maintain NNV, using it mainly for teaching and design-time verification

  • We are developing a new framework for runtime monitoring and verification of distributed learning-enabled CPS (DLe-CPS) called StarV
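For reference, a star set has the form {c + V @ alpha | C @ alpha <= d}: a center c, a basis matrix V, and a linear predicate over the variables alpha. The sketch below (illustrative Python, not NNV's MATLAB API) shows the representation and why affine layers map it exactly:

    import numpy as np

    class Star:
        # Star set {c + V @ alpha | C @ alpha <= d}; illustrative, not NNV's API.
        def __init__(self, c, V, C, d):
            self.c, self.V, self.C, self.d = c, V, C, d

        def affine_map(self, W, b):
            # An affine layer y = W x + b maps a star set exactly:
            # only the center and basis change; the predicate (C, d) is untouched.
            return Star(W @ self.c + b, W @ self.V, self.C, self.d)

    # The unit box [-1, 1]^2 as a star: center 0, identity basis, -1 <= alpha_i <= 1
    box = Star(c=np.zeros(2),
               V=np.eye(2),
               C=np.vstack([np.eye(2), -np.eye(2)]),
               d=np.ones(4))

    W, b = np.array([[2.0, 1.0], [0.0, 1.0]]), np.array([1.0, 0.0])
    image = box.affine_map(W, b)   # exact image of the box under the layer
    print(image.c, image.V)

A ReLU layer is then handled either exactly, by splitting on the sign of each neuron and adding the corresponding constraints to (C, d) (which is where the thousands of polyhedra below come from), or with a single over-approximating star.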

Applications

Safety Verification of ACAS Xu Networks

An exact reachable set may contain thousands of polyhedra

Conservativeness of NNV-approximate star, Zonotope (DeepZ) and Abstract-Domain (DeepPoly) methods

DeepZ may be very conservative

NNV is the only approach that can construct a "complete" set of counterexamples

Verification performance improves with parallel computing (N is the number of cores)

Comparison of different tools on 45 ACAS Xu networks with safety property P3: NNV exact-star is 4x faster than Reluplex, 374x faster than Marabou, and 15x faster than Marabou-DNC; NNV approx-star is 547x faster than Reluplex, 374x faster than Marabou, and 2152x faster than Marabou-DNC. NNV approx-star verifies 29/45 networks, while DeepZ verifies 2/45 and DeepPoly verifies 10/45.

Safety Verification of an Advanced Emergency Braking System

Safety Requirement: Time-To-Collision (TTC) > Full Braking Time, i.e., at any time, there is enough time to apply full braking without a collision
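A hedged sketch of this requirement check, using common textbook definitions (TTC as relative distance over closing speed, full braking time as ego speed over maximum deceleration), not necessarily the exact model verified by NNV; all numbers are illustrative:

    # Illustrative AEBS requirement check; the formulas below are
    # common textbook definitions, not necessarily the paper's model.
    def time_to_collision(d_rel, v_rel):
        # TTC = relative distance / closing speed (v_rel > 0 means closing in)
        return d_rel / v_rel if v_rel > 0 else float("inf")

    def full_braking_time(v_ego, a_max):
        # Time needed to stop from speed v_ego under maximum deceleration a_max
        return v_ego / a_max

    # Hypothetical numbers: 40 m gap, closing at 10 m/s, ego at 20 m/s, 8 m/s^2 braking
    ttc = time_to_collision(40.0, 10.0)           # 4.0 s
    t_brake = full_braking_time(20.0, 8.0)        # 2.5 s
    print("safe" if ttc > t_brake else "unsafe")  # requirement: TTC > full braking time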

Reachable sets of Time-To-Collision: the system is safe

The star-set method can significantly reduce, and sometimes eliminate, the conservativeness of the reachable set computation


Provably Safe Initial Conditions

Safety Verification of Advanced Adaptive Cruise Control System

Safety requirement: d = x_lead - x_ego > d_safe = 10 + 1.4 v_ego

There are cases where d < d_safe: the system is unsafe

d is always larger than d_safe: the system is safe
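A minimal sketch of how this property can be checked on reachable sets: take interval bounds on x_lead, x_ego, and v_ego at each step and verify the worst case, i.e., the smallest possible d against the largest possible d_safe. The bounds below are made-up placeholders, not data from the case study:

    # Worst-case check of d = x_lead - x_ego > d_safe = 10 + 1.4 * v_ego
    # over interval bounds of a reachable set (illustrative numbers).
    reach_steps = [
        # (x_lead_lb, x_lead_ub, x_ego_lb, x_ego_ub, v_ego_lb, v_ego_ub)
        (90.0, 95.0, 30.0, 32.0, 25.0, 27.0),
        (95.0, 100.0, 40.0, 43.0, 26.0, 28.0),
    ]

    for k, (xl_lb, xl_ub, xe_lb, xe_ub, ve_lb, ve_ub) in enumerate(reach_steps):
        d_min = xl_lb - xe_ub                 # smallest possible relative distance
        d_safe_max = 10.0 + 1.4 * ve_ub       # largest possible safe distance
        status = "safe" if d_min > d_safe_max else "possibly unsafe"
        print(f"step {k}: d_min={d_min:.1f}, d_safe_max={d_safe_max:.1f} -> {status}")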

Robustness Verification of VGG16 under Adversarial Attacks

  • 16 layers, 138M parameters

  • Classifies images into 1000 classes

  • Top-5 accurate classifier

Is VGG16 robust under an adversarial perturbation a <= 2x10^(-7)?

YES! The network is robust if a <= 2x10^(-7)
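Here a denotes the radius of an l-infinity perturbation ball around the input image. The formal proof comes from reachability analysis (e.g., with ImageStar); the sketch below only shows the input set being verified plus a sampling-based sanity check, which can find counterexamples but never prove robustness. model_predict and all names are hypothetical:

    import numpy as np

    def linf_input_set(image, a):
        # L-infinity perturbation set: every pixel may move by at most a
        return image - a, image + a   # (lower bound, upper bound)

    def sample_robustness_check(model_predict, image, a, n_samples=100):
        # Sampling-based sanity check (NOT a proof): does the label ever change?
        base_label = np.argmax(model_predict(image))
        lb, ub = linf_input_set(image, a)
        for _ in range(n_samples):
            x = np.random.uniform(lb, ub)
            if np.argmax(model_predict(x)) != base_label:
                return False  # counterexample found: definitely not robust
        return True           # no counterexample found; robustness still unproven

    # model_predict is a hypothetical callable returning class scores;
    # a = 2e-7 matches the bound verified above.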

Output ranges of a MNIST network obtained by ImageStar, DeepZ and DeepPoly

Robustness Verification of MNIST and M2NIST Semantic Segmentation Networks

  • A pixel is robust under an attack if its classified label is unchanged

  • What is the percentage of robust pixels of a network under an attack?

  • What is the sensitivity of a network under an attack?

  • What is the robust IoU (Intersection-over-Union) of a network under an attack? (a sketch of these metrics follows below)

The segmentation reach set covers all possible classified labels of all pixels under an attack.
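A hedged sketch of how these metrics can be computed from per-pixel reachable label sets; the data structure and the worst-case IoU definition here are illustrative assumptions, not necessarily NNV's exact definitions:

    import numpy as np

    # reach_labels[i] is the set of all labels pixel i can take under the
    # attack (flattened image); ground_truth[i] is its correct label.
    def robust_pixel_percentage(reach_labels, ground_truth):
        robust = [len(s) == 1 and next(iter(s)) == y
                  for s, y in zip(reach_labels, ground_truth)]
        return 100.0 * sum(robust) / len(robust)

    def robust_iou(reach_labels, ground_truth, label):
        # Worst-case IoU for one class: a pixel counts as the class only if it
        # is guaranteed to be classified as that class under every attack.
        pred = np.array([s == {label} for s in reach_labels])
        gt = np.array([y == label for y in ground_truth])
        inter, union = np.sum(pred & gt), np.sum(pred | gt)
        return inter / union if union > 0 else 1.0

    # Toy example: 4 pixels, two classes {0, 1}
    reach_labels = [{1}, {1}, {0, 1}, {0}]   # pixel 3's label is not unique
    ground_truth = [1, 1, 1, 0]
    print(robust_pixel_percentage(reach_labels, ground_truth))  # 75.0
    print(robust_iou(reach_labels, ground_truth, 1))            # 2/3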

  • Verification of MNIST segmentation networks

  • Depending on the size of the attack and the number of attacked pixels, the number of misclassified pixels and unknown pixels may vary.