6th International Verification of Neural Networks Competition (VNN-COMP'25)
Sponsorship
VNN-COMP'24 was sponsored by CEA-List. We are open to sponsors for VNN-COMP'25.
News / Updates
2025-03-03: Draft rules are available below; please add comments if you have any suggestions, or post in the GitHub issues.
Draft rules document: https://docs.google.com/document/d/1nvkThDbSlWfu_s23vxslI-OJLFYwsShpQwf3ZCYqU48/edit?usp=sharing
Github rules issue: https://github.com/verivital/vnncomp2025/issues/1
Competition contribution papers from participating tools: participating tools have the option of submitting a short paper (4-6 pages) describing their tool and competition strategy, inspired by SV-COMP competition contribution papers. See here for a representative outline of such a paper: https://sv-comp.sosy-lab.org/2025/submission.php
Here are some exemplar competition contribution papers: https://scholar.google.com/scholar?q=%22competition+contribution%22+tacas&hl=en&as_sdt=0%2C43&as_ylo=2023
2025-02-08: The call for participation and benchmarks is open; the registration form is here: https://forms.gle/73samugGv91woo169. Please register by March 24. We plan to hold the rules meeting on April 3 at 9am US Central (on Zoom; the link will be sent to those registered).
2024-10-03: VNN-COMP'25 will again be co-located with CAV in 2025, alongside the next iteration of a new conference dedicated to AI verification, the Symposium on AI Verification (SAIV'25): https://www.aiverification.org/
We are planning several changes for VNN-COMP'25, including (1) a track for published benchmark papers (see the SAIV call: https://www.aiverification.org/2025/call/ ), (2) published competition contribution reports for tool participants (similar to SV-COMP), and (3) additional organizers (please contact Taylor if you are interested in getting involved).
Useful tools / frameworks
Prior iteration benchmarks and references:
David Shriver provides a Python framework to help with parsing VNN-LIB here: https://github.com/dlshriver/vnnlib
CoCoNet is also useful for VNN-LIB parsing and network interchange: https://github.com/NeVerTools/CoCoNet
Christopher Brix provides the execution framework here (for submitting tools, benchmarks, etc.): https://vnncomp.christopher-brix.de/
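For reference, a minimal VNN-LIB property file (the format the parsers above consume) uses an SMT-LIB-like syntax, with inputs conventionally named X_i and outputs Y_i. The sketch below is an illustrative, hand-written example assuming a network with two inputs and one output; it is not taken from any competition benchmark:

```
; Property: for inputs in the box [0, 1] x [-1, 1],
; the output must stay below 3.5.
(declare-const X_0 Real)
(declare-const X_1 Real)
(declare-const Y_0 Real)
(assert (>= X_0 0.0))
(assert (<= X_0 1.0))
(assert (>= X_1 -1.0))
(assert (<= X_1 1.0))
; The output constraint encodes the NEGATION of the desired property:
; a satisfying assignment is a counterexample (property violated).
(assert (>= Y_0 3.5))
```

A tool answers "unsat" (property holds) or "sat" with a concrete counterexample input.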
Prior reports for reference
Summary of 2020-2022: https://arxiv.org/abs/2301.05815 and https://link.springer.com/article/10.1007/s10009-023-00703-4
2020: Overleaf only: https://www.overleaf.com/read/rbcfnbyhymmy
VNN-LIB: http://www.vnnlib.org/
General Information
The 6th International Verification of Neural Networks Competition (VNN-COMP'25) will be held with the 8th International Symposium on AI Verification (SAIV'25) at the 37th International Conference on Computer Aided Verification (CAV'25), July 21-25, 2025, in Zagreb, Croatia. It aims to bring together researchers interested in formal methods and tools providing guarantees about the behaviors of neural networks and systems built from them.
Introduction and Background
Methods based on machine learning are increasingly being deployed for a wide range of problems, including recommender systems, machine vision, autonomous driving, and beyond. While machine learning has made significant contributions to such applications, concerns remain about the lack of methods and tools to provide formal guarantees about the behaviors of the resulting systems.
In particular, for data-driven methods to be usable in safety-critical applications, including autonomous systems, robotics, cybersecurity, and cyber-physical systems, it is essential that the behaviors generated by neural networks are well understood and can be predicted at design time. For systems that learn at run time, it is desirable that any change to the underlying system respects a given safety envelope.
While the literature on verification of traditionally designed systems is broad and successful, results and efforts on verifying neural networks have emerged only recently. The competition intends to bring together researchers working on techniques for the verification of neural networks. We anticipate an organization and process similar to the International Competition on Verifying Continuous and Hybrid Systems (ARCH-COMP), which categorizes problems by system expressiveness and problem formulation. In the context of VNN, categories could for instance distinguish feedforward networks from recurrent ones (RNNs). Within these broad categorizations, further categorization may exist based on which layers a verification approach supports (e.g., many approaches allow ReLUs, but relatively few allow nonlinear activations such as tanh). If you are interested, please contact the organizers. We anticipate analysis on existing benchmarks and challenge problems, such as ACAS-Xu, MNIST, and CIFAR-10, but are also open to new challenge problems and benchmarks. We will follow procedures similar to those of the 5th VNN-COMP.
Organizers
Stanley Bak, Stony Brook University [email]
Changliu Liu, Carnegie Mellon University [email]
Taylor T. Johnson, Vanderbilt University [email]
Christopher Brix, RWTH Aachen University [email]
Haoze (Andrew) Wu, Stanford University [email]
Important Dates
Intention to participate: March 24, 2025. (You may continue to join after this date, but please also email the organizers if you submit the form late, so we can add you to the mailing list for email updates.) Registration form: https://forms.gle/73samugGv91woo169
Rules meeting (Zoom): April 3, 2025, 9am US Central time. Zoom info will be sent by email to everyone who submitted the registration form.
Finalization of the rules: May 1, 2025
Initial submission/proposals of all benchmarks: May 8, 2025
Deadline for submission of competition contribution papers: May 16, 2025
Notification for competition contribution papers: May 23, 2025
Camera ready deadline for competition contribution papers: May 30, 2025
Voting for selecting benchmarks in regular and extended tracks: June 2, 2025 AOE
Participants finalize tool scripts and organizers begin running tools: June 30, 2025 AOE
SAIV with presentation of VNN-COMP results and report: July 21-22, 2025
How To Participate / Registration
Given the infancy of the area, this friendly competition is a mechanism to share and standardize relevant benchmarks to enable easier progress within the domain, as well as to better understand which methods are most effective for which problems, along with their current limitations. Subject to sponsorship, we may offer a "best competition results" award, with a selection process that will involve community feedback.
Register by March 24, 2025 (you may continue to join after this date, but please register sooner rather than later). If you are interested in learning more, participating, and helping guide the direction of the competition, please use this Google form: https://forms.gle/73samugGv91woo169
We welcome participation from all, and have particularly considered the following groups of participants:
Verification / Robustness Tool Developers: you have a tool/method for proving properties of neural networks
Benchmark / Challenge Problem Proposers: you have neural networks and properties you would like to check for them, and can publicly share both
Sponsors and Others: you are interested in the area, but do not want to participate with a tool or benchmark/challenge problem, and/or would be interested in sponsoring a "best competition results" award
Participation will be done remotely in advance of SAIV, with summary results presented at SAIV. Attendance at SAIV by VNN-COMP participants will NOT be required, but of course you would be welcome to attend. Coordination mechanisms are open to discussion, but likely would be facilitated via, e.g., Git repositories, Slack, and/or forums such as Google Groups.
The mechanisms of the competition are community driven, along the lines of prior related competitions for hybrid systems verification (ARCH-COMP, https://cps-vo.org/group/ARCH/FriendlyCompetition ) and software verification (SV-COMP, https://sv-comp.sosy-lab.org/ ), and we welcome any suggestions for organization. Current plans anticipate some subdivisions of the competition into categories, such as by what layers/activations different tools and methods allow, whether they perform exact (sound and complete) or over-approximative (sound but incomplete) analysis, or training/synthesis vs. verification.
Anticipated benchmarks include those used in prior VNN-COMP iterations, such as ACAS-Xu, MNIST classifiers, CIFAR classifiers, etc., with various parameterizations (initial states, specifications, robustness bounds, etc.), and the mechanism for selection will be community driven, similar to the benchmark jury selection in related competitions. Participants are welcome to propose other benchmarks.
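To make the robustness parameterization above concrete, here is a toy sketch (not any competition tool, and using a hypothetical hand-picked network) of the kind of query such benchmarks encode: given a small ReLU classifier, an input x0, and a radius eps, does the predicted class stay unchanged for all inputs within the L-infinity ball around x0? Random sampling, as below, can only falsify such a property by finding a counterexample; VNN-COMP tools instead use sound verification methods to prove it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-4-2 ReLU classifier (weights chosen purely for illustration).
W1 = np.array([[1.0, -1.0], [0.5, 2.0], [-1.5, 0.3], [0.2, 0.2]])
b1 = np.array([0.1, -0.2, 0.0, 0.3])
W2 = np.array([[1.0, -0.5, 0.7, 0.0], [-0.3, 1.2, 0.0, 0.9]])
b2 = np.array([0.0, 0.1])

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2                # output logits

def sample_falsify(x0, eps, n=10_000):
    """Search for a counterexample to local robustness around x0.

    Returns a perturbed input whose predicted class differs from x0's,
    or None if no counterexample is found (which proves nothing by itself).
    """
    label = int(np.argmax(forward(x0)))
    for _ in range(n):
        x = x0 + rng.uniform(-eps, eps, size=x0.shape)
        if int(np.argmax(forward(x))) != label:
            return x
    return None

x0 = np.array([0.5, -0.5])
cex = sample_falsify(x0, eps=0.05)
print("counterexample found" if cex is not None else "no counterexample in samples")
```

A real benchmark pairs the network (in ONNX) with such a query written as a VNN-LIB property file, and a verifier must return a sound "holds" / "violated" answer rather than a sampling-based guess.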
Some goals of this initiative are to lead to greater standardization of benchmarks, model formats (ONNX, etc.), and so on, which has helped lead to advances in other areas (e.g., SMT-LIB, http://smtlib.cs.uiowa.edu/ ), such as by making progress on the VNN-LIB initiative (http://www.vnnlib.org/ ), as well as to get some scientific sense of the current landscape of methods and their applicability to different problems. Eventually, we hope the initiative will lead to better comparisons among methods. Depending on levels of participation, categories, etc., the outcome of the competition may be a series of competition reports and a repeatability evaluation.
Thank you for your consideration, and we hope you will participate. Please let us know of any questions, if you would like to discuss before making a decision, or any suggestions you may have for the organization of this initiative, as we believe it will be most successful if driven actively by the community.
Previous Workshops/Symposia/Competitions
5th International Verification of Neural Networks Competition (VNN-COMP'24), Part of CAV'24 with SAIV'24
4th International Verification of Neural Networks Competition (VNN-COMP'23), Part of CAV'23 with FoMLAS'23
3rd International Verification of Neural Networks Competition (VNN-COMP'22), Part of CAV'22 with FoMLAS'22
2nd International Verification of Neural Networks Competition (VNN-COMP'21), Part of CAV'21
3rd International Workshop on Verification of Neural Networks (VNN20) and 1st International Verification of Neural Networks Competition (VNN-COMP'20), Part of CAV'20
AAAI Spring Symposium on Verification of Neural Networks (VNN19)
Sponsor
Please contact the organizers if you are interested to sponsor.