3rd International Workshop on Verification of Neural Networks (VNN20) and 1st International Verification of Neural Networks Competition (VNN-COMP'20)
News / Updates
Registration (free) for VNN is handled via CAV registration; please register at http://i-cav.org/2020/attending/
Preliminary program for VNN is available: https://sites.google.com/view/vnn20/program
June 24, 2020: Given the virtual format, the workshop has been moved to a weekday, July 21, from the originally scheduled July 19 (which falls on a weekend). The live/synchronous parts of the workshop will follow the CAV main program, approximately noon-2pm US Pacific time. The invited and paper talks will be pre-recorded and made available before the workshop, with live/synchronous Q&A on July 21, as well as discussions on VNN-COMP/VNN-LIB.
May 10, 2020: CAV and VNN will be conducted virtually, details forthcoming
March 31, 2020: Call for papers is available: https://sites.google.com/view/vnn20/cfp
February 11, 2020: The VNN-COMP initial call for participation is up; please email Changliu and Taylor by the end of February if you are interested in participating or have any suggestions. Further details: https://sites.google.com/view/vnn20/vnncomp
The 2020 Workshop on Verification of Neural Networks (VNN20), to be held with the 32nd International Conference on Computer-Aided Verification (CAV 2020) over July 19-24 2020, in Los Angeles, USA, aims to bring together researchers interested in methods and tools providing guarantees about the behaviours of neural networks and systems built from them.
VNN20 will be held on July 21, 2020. A related event, the Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), will be held on July 20, 2020. VNN focuses more narrowly on neural network verification, whereas FoMLAS focuses on the broader use of machine learning components in autonomy.
Introduction and Background
Methods based on machine learning are increasingly being deployed for a wide range of problems, including recommender systems, machine vision, autonomous driving, and beyond. While machine learning has made significant contributions to such applications, concerns remain about the lack of methods and tools to provide formal guarantees about the behaviours of the resulting systems.
In particular, for data-driven methods to be usable in safety-critical applications, including autonomous systems, robotics, cybersecurity, and cyber-physical systems, it is essential that the behaviours generated by neural networks are well-understood and can be predicted at design time. In the case of systems that learn at run-time, it is desirable that any change to the underlying system respects a given safety envelope for the system.
While the literature on verification of traditionally designed systems is wide and successful, verification of neural networks received comparatively little attention until recently. The workshop intends to bring together researchers working on a range of techniques for the verification of neural networks, from formal methods to optimisation and testing. The key objectives include: presentation of recent work in the area; discussion of key difficulties; collecting community benchmarks; and fostering collaboration.
One challenge for research in this area is that results are being published across several research communities, including formal verification, security and privacy, systems, and AI.
The workshop will include invited speakers, contributed papers, demonstrations, breakout sessions, and panel sessions.
VNN-COMP Neural Network Verification Competition
We are currently organizing the details for the neural network verification competition. We anticipate an organization and process similar to the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP), which categorizes entries by system expressiveness and problem formulation. In the context of VNN, this could, for instance, mean separate problem categories for feedforward versus recurrent (RNN) networks. Within these broad categories, further subdivision may be based on which layers a verification approach supports (e.g., many approaches handle ReLUs, but relatively fewer handle other nonlinear activations such as tanh). If you are interested, please contact the organizers. We anticipate analysis of existing benchmarks and challenge problems, such as ACAS-Xu, MNIST, and CIFAR-10, but are also open to new challenge problems and benchmarks.
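To make the layer-support distinction above concrete, the sketch below illustrates one common building block of ReLU-capable verifiers: interval bound propagation, which pushes an input box through affine and ReLU layers to obtain sound output bounds. This is a minimal illustration, not part of the competition infrastructure; the network weights and input intervals are invented for the example.

```python
# Minimal sketch (illustrative only): interval bound propagation through a
# small feedforward ReLU network. Given an input box [lo, hi], it computes
# provably sound bounds on the network output -- the kind of reasoning many
# ReLU-supporting verification approaches build on.

def affine_bounds(W, b, lo, hi):
    """Propagate interval bounds through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # A positive weight maps the input's lower bound to the output's
        # lower bound; a negative weight flips the roles.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Toy two-layer network (weights are made up for illustration).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = [0.0, 1.0], [1.0, 2.0]          # input box
lo, hi = affine_bounds(W1, b1, lo, hi)   # hidden pre-activation bounds
lo, hi = relu_bounds(lo, hi)             # hidden post-activation bounds
lo, hi = affine_bounds(W2, b2, lo, hi)   # output bounds
print(lo, hi)                            # prints [0.25] [1.25]
```

Verifying a safety property then amounts to checking that the computed output interval lies inside the allowed region; activations such as tanh require additional machinery (e.g., linear relaxations) because a plain sign-based interval rule no longer suffices.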
More details can be found on the VNN-COMP subpage: https://sites.google.com/view/vnn20/vnncomp