Program - 2020 Verification of Neural Networks Workshop

July 21, 2020

The workshop was originally scheduled for July 19 but, given the transition to a virtual format, has been moved to July 21, following the CAV main program. The format will be a "flipped classroom" style: pre-recorded presentations from invited speakers and paper presenters should be watched asynchronously before the workshop, followed by live synchronous Q&A from noon to 2pm US Pacific time (PDT, the same time zone as the main CAV program, based in Los Angeles). Slides, papers, and videos are also provided below.

Timings: noon to 2pm US Pacific time

YouTube Live Stream: https://youtu.be/OMJx3Lru0tI

YouTube Playlist of Pre-Recorded Presentations to Watch before the Live/Synchronous Q&A in the Workshop: https://www.youtube.com/playlist?list=PLMPy362FkW9opHkdunPe9nWkHqSmU5hHd

Schedule

  • noon - 12:10pm: VNN introduction, Profs. Changliu Liu and Taylor Johnson

  • 12:10pm - 12:30pm: Q&A for Invited Keynote Abstractions for Scalable Verification of AI-Controlled Cyber-Physical Systems (CPS), Prof. Pavithra Prabhakar

  • 12:30pm - 12:50pm: Q&A for Invited Keynote End-to-end robustness for Sensing-Reasoning pipeline in adversarial environments, Prof. Bo Li

  • 12:50pm - 1:30pm: Contributed Paper Q&A

    • 12:50pm - 1:00pm: Eleonora Pippia, Thao Dang and Alberto Policriti. Image Approximation for Feed Forward Neural Nets

    • 1:00pm - 1:10pm: Stanley Bak. Execution-Guided Overapproximation (EGO) for Improving Scalability of Neural Network Verification

    • 1:10pm - 1:20pm: Divyansh Pareek, Saket Dingliwal and Jatin Arora. Property Inference in ReLU nets using Linear Interpolants

    • 1:20pm - 1:30pm: Diego Manzanas Lopez, Patrick Musau and Taylor T Johnson. On the Effectiveness of L1-Norm Based Channel Pruning for Convolutional Neural Network Verification

  • 1:30pm - 1:55pm: VNN-COMP Report, Discussion, and Future Plans

  • 1:55pm - 2:00pm: VNN Conclusion and Future Plans

Invited Keynotes

Invited Keynote: Abstractions for Scalable Verification of AI-Controlled Cyber-Physical Systems (CPS), Prof. Pavithra Prabhakar, Kansas State University: http://people.cs.ksu.edu/~pprabhakar/

Abstract: Traditional feedback controllers in cyber-physical systems (CPS) are being increasingly replaced by learning-based components such as artificial neural networks (NN). Given the safety criticality of AI-controlled CPS such as autonomous vehicles, rigorous analysis techniques such as formal verification have become imperative. While there is a rich body of work on formal verification of traditional CPS in the realm of hybrid systems, the area of rigorous testing and analysis of CPS controlled by AI components such as neural networks is still in its nascent stages.

One of the main challenges with formal verification of neural network controlled CPS is the scalability of the analysis methods to large networks and complex dynamics. We present a novel abstraction technique for neural network size reduction that provides soundness guarantees for safety analysis and indicates a promising direction for scalable analysis of the closed-loop system. In particular, our abstraction consists of constructing a simpler neural network with fewer neurons, albeit with interval weights, called an interval neural network (INN), which over-approximates the output range of the given neural network. We reduce the output range analysis problem on the INNs to solving a mixed integer linear programming problem. Our experimental results highlight the trade-off between the computation time and the precision of the computed output range.
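To illustrate the idea of an interval neural network, the minimal sketch below propagates an input box through layers whose weights are intervals [W_lo, W_hi]. This is not the speakers' implementation: the talk reduces exact INN output-range analysis to a MILP, whereas this sketch uses plain interval arithmetic, which is sound but generally looser. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def interval_matvec(Wl, Wu, xl, xu):
    """Sound bounds on W @ x when W is in [Wl, Wu] and x is in [xl, xu].

    Each elementwise product w * x attains its extremes at interval
    endpoints, so we take the min/max over all four endpoint products.
    """
    cands = np.stack([Wl * xl, Wl * xu, Wu * xl, Wu * xu])
    lo = cands.min(axis=0).sum(axis=1)
    hi = cands.max(axis=0).sum(axis=1)
    return lo, hi

def inn_output_range(layers, xl, xu):
    """Over-approximate the output range of a ReLU INN on the box [xl, xu].

    `layers` is a list of (Wl, Wu, b) triples; ReLU is applied after every
    layer except the last. ReLU is monotone, so applying it to the bounds
    preserves soundness.
    """
    for Wl, Wu, b in layers[:-1]:
        lo, hi = interval_matvec(Wl, Wu, xl, xu)
        xl, xu = np.maximum(lo + b, 0.0), np.maximum(hi + b, 0.0)
    Wl, Wu, b = layers[-1]
    lo, hi = interval_matvec(Wl, Wu, xl, xu)
    return lo + b, hi + b
```

Any concrete network whose weights lie inside the intervals, evaluated on any input inside the box, must produce outputs inside the returned bounds; the MILP formulation in the talk tightens these bounds by encoding the ReLU activations exactly.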

Bio: Dr. Pavithra Prabhakar is an associate professor of computer science at Kansas State University, where she holds the Peggy and Gary Edwards Chair in Engineering. She obtained her doctorate in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) and was a CMI (Center for Mathematics of Information) fellow at Caltech. She previously held a faculty position at the IMDEA Software Institute in Madrid, Spain. Her main research interest is in the formal analysis of cyber-physical systems, with emphasis on both theoretical and practical methods for verification and synthesis of hybrid control systems. She is the recipient of a Marie Curie Career Integration Grant from the European Union, an NSF CAREER Award, and an ONR Young Investigator Award.

Invited Keynote: End-to-end robustness for Sensing-Reasoning pipeline in adversarial environments, Prof. Bo Li, University of Illinois Urbana-Champaign: http://boli.cs.illinois.edu/

Abstract: Advances in machine learning have led to rapid and widespread deployment of software-based inference and decision making, resulting in various applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Much previous work focuses on the robustness of independent ML and ensemble models, and can only certify a small magnitude of adversarial perturbation. In this talk, I will discuss a different viewpoint and improve learning robustness by going beyond independent ML and ensemble models. Aiming at promoting the generic Sensing-Reasoning machine learning pipeline, which contains both sensing components (e.g., deep neural networks) and reasoning components (e.g., Markov logic networks (MLN)) enriched with domain knowledge, I plan to show how domain knowledge helps improve the learning robustness and how we can formally certify the end-to-end robustness of such an ML pipeline. I will theoretically analyze the computational complexity of checking the provable robustness in the reasoning component, and then derive the provable robustness bound for several concrete reasoning components. I will also show that in some real-world experiments on large-scale datasets such as ImageNet, it is possible to certify the robustness for Sensing-Reasoning ML pipelines.

Bio: Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign, and is a recipient of the Symantec Research Labs Fellowship, Rising Stars, MIT TR 35, and best paper awards at several machine learning and security conferences. Previously she was a postdoctoral researcher at UC Berkeley. Her research focuses on both theoretical and practical aspects of security, machine learning, privacy, game theory, and adversarial machine learning. She has designed several robust learning algorithms, a scalable framework for achieving robustness for a range of learning methods, and a privacy-preserving data publishing system. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and IEEE Spectrum.

VNN Invited Keynotes

Pavithra Prabhakar. Abstractions for Scalable Verification of AI-Controlled Cyber-Physical Systems (CPS)

Video: https://youtu.be/YT2PAyg7lMw

prabhakar2020vnn_slides.pdf

Bo Li. End-to-end robustness for Sensing-Reasoning pipeline in adversarial environments

Video: https://youtu.be/3twXysCE9Js

li2020vnn_slides.pdf

VNN Paper Presentations

Eleonora Pippia, Thao Dang and Alberto Policriti. Image Approximation for Feed Forward Neural Nets

Video: https://youtu.be/ra2_toZalIg

Paper PDF: https://drive.google.com/file/d/138EDAhNgkPE_yuFgbFnmykukzOCsw0EM/view?usp=sharing

pippia2020vnn_slides.pdf

Stanley Bak. Execution-Guided Overapproximation (EGO) for Improving Scalability of Neural Network Verification

bak2020vnn_slides.pdf

Divyansh Pareek, Saket Dingliwal and Jatin Arora. Property Inference in ReLU nets using Linear Interpolants

Video: https://youtu.be/fYRd7iJqgjU

Paper PDF: https://drive.google.com/file/d/1LOqYgVJiGr6eMKI67m5ZQsKHPL1pSdPt/view?usp=sharing

pareek2020vnn_slides.pdf

Diego Manzanas Lopez, Patrick Musau and Taylor T Johnson. On the Effectiveness of L1-Norm Based Channel Pruning for Convolutional Neural Network Verification

Video: https://youtu.be/Y829slsqa4Y

Paper PDF: https://drive.google.com/file/d/1qn0MEK_7l-rqqw4H3Mzl-QN3OCrb5BQW/view?usp=sharing

lopez2020vnn_slides.pdf