Robustness of AI Systems Against Adversarial Attacks (RAISA3)
June 8, 2020 – Santiago de Compostela, Spain
Workshop Description
The RAISA3 workshop will focus on the robustness of AI systems against adversarial attacks. While most research efforts in adversarial AI investigate attacks and defenses with respect to particular machine learning algorithms, our approach will be to explore the impact of adversarial AI at the system architecture level. In this workshop we will discuss adversarial AI attacks that can impact an AI system at each of its processing stages: the input stage of sensors and sources, the data conditioning stage, the training and application of machine learning algorithms, the human-machine teaming stage, and the application of the system within its mission context. We will additionally discuss attacks against the supporting computing technologies.
The RAISA3 workshop is a full-day event and will include invited keynote speakers working in the research area, as well as a number of relevant presentations selected through a Call for Participation.
In general, adversarial AI attacks against AI systems take three forms: 1) data poisoning attacks inject incorrectly or maliciously labeled data points into training sets so that the algorithm learns the wrong mapping; 2) evasion attacks perturb correctly classified input samples just enough to cause errors in runtime classification; and 3) inversion attacks repeatedly probe a trained algorithm with edge-case inputs in order to reveal its previously hidden decision boundaries and training data. Protections against these attacks include techniques that cleanse training sets of outliers in order to thwart data poisoning attempts, and methods that sacrifice some up-front algorithm performance in order to be robust to evasion attacks. As AI capabilities are incorporated into more facets of everyday life, understanding adversarial attacks on AI systems, their effects, and the relevant mitigation approaches becomes of paramount importance.
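To make the second attack form concrete, here is a minimal sketch of an evasion attack, the fast gradient sign method (FGSM), run against a toy logistic-regression classifier. The weights, the input, and the attack budget `epsilon` are all illustrative assumptions for demonstration, not part of the workshop material.

```python
import numpy as np

# Illustrative evasion attack: FGSM against a toy logistic-regression model.
rng = np.random.default_rng(0)

w = rng.normal(size=8)  # "trained" weights of the toy model
b = 0.1                 # bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

# Construct an input the toy model confidently classifies as class 1.
x = 0.5 * np.sign(w)
y = 1.0  # true label

# Gradient of the cross-entropy loss w.r.t. the input:
# for logistic regression, dL/dx = (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM: step epsilon in the sign of the input gradient to increase the loss.
epsilon = 0.6  # attack budget, chosen large enough to flip this toy model
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {predict(x):.3f}")      # > 0.5 (class 1)
print(f"adversarial prediction: {predict(x_adv):.3f}")  # < 0.5 (misclassified)
```

Defenses of the second kind mentioned above, such as adversarial training, fold perturbed samples like `x_adv` back into the training set, trading some clean accuracy for robustness.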
Central to this methodology is the notion of threat modeling, which frames the discussion of potential attacks and their mitigations; a minimal sketch follows.
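As one way to seed that discussion, the sketch below enumerates the processing stages named in the workshop description and pairs each with candidate threats. The stage keys and the threats listed per stage are illustrative assumptions, not a workshop-endorsed taxonomy.

```python
# Minimal threat-model sketch for an end-to-end AI system. Stages follow the
# workshop description; the example threats per stage are assumptions.
THREAT_MODEL = {
    "sensors_and_sources":   ["spoofed sensor readings", "compromised data feeds"],
    "data_conditioning":     ["data poisoning", "label flipping"],
    "machine_learning":      ["evasion (adversarial examples)", "model inversion"],
    "human_machine_teaming": ["misleading explanations", "automation-bias exploitation"],
    "mission_deployment":    ["runtime evasion", "denial of service"],
    "computing_platform":    ["supply-chain compromise", "hardware/software cyber attacks"],
}

def enumerate_threats(stage):
    """Return the candidate threats to consider for a given pipeline stage."""
    return THREAT_MODEL.get(stage, [])

for stage, threats in THREAT_MODEL.items():
    print(f"{stage}: {', '.join(threats)}")
```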
The workshop format is structured to encourage a lively exchange of ideas among researchers developing techniques to mitigate adversarial attacks on end-to-end AI systems.
Important Dates
- Website Launch – January 16, 2020
- Paper Submission Deadline – March 16, 2020 (midnight, Pacific time)
- Author Notification – April 3, 2020
- RAISA3 Workshop – June 8, 2020
- ECAI 2020 – June 8–12, 2020
Workshop Theme
Identifying, protecting against, detecting, responding to, and recovering from adversarial attacks on AI systems
Keynotes
- Khoury College of Computer Sciences, Northeastern University, Boston, MA
- Research Scientist, Google Brain; Assistant Professor, University of Toronto (Fall 2019)
Workshop Format
Invited speakers, presentations, panel and group discussions
Workshop Topics
- AI threat modeling
- Protection against attacks on end-to-end AI architecture:
  - Data conditioning stage
  - Adversarial machine learning
  - Human-machine teaming stage
  - Cyber attacks against AI hardware and/or software
  - Deployment stage
- Explainable AI
- System lifecycle attacks
- System verification and validation
- System performance metrics, benchmarks and standards
- Protection and detection techniques against black-box, white-box, and gray-box adversarial attacks
- Defenses against training attacks
- Defenses against testing (inference) attacks
- Response and recovery based on:
  - Confidence levels
  - Consequences of action
  - AI system confidentiality, integrity, and availability