Robustness of AI Systems Against Adversarial Attacks (RAISA3)
Virtual Zoom Webinar – August 29, 2020
Important Dates
Website Launched – January 16, 2020
Paper Submission Deadline – May 15, 2020 (midnight, Pacific time)
Author Notification – July 6, 2020
RAISA3 Workshop – August 29, 2020
ECAI 2020 – August 29–September 2, 2020
Workshop Description
The RAISA3 workshop will focus on the robustness of AI systems against adversarial attacks. While most research efforts in adversarial AI investigate attacks and defenses with respect to particular machine learning algorithms, our approach will be to explore the impact of adversarial AI at the system architecture level. In this workshop we will discuss adversarial AI attacks that can impact an AI system at each of its processing stages, including: at the input stage of sensors and sources, at the data conditioning stage, during training and application of machine learning algorithms, at the human-machine teaming stage, and during application within the mission context. We will additionally discuss attacks against the supporting computing technologies.
The RAISA3 workshop is a full-day event and will include invited keynote speakers working in the research area, as well as a number of relevant presentations selected through a Call for Participation.
In general, adversarial AI attacks against AI systems take three forms: 1) data poisoning attacks inject incorrectly or maliciously labeled data points into training sets so that the algorithm learns the wrong mapping, 2) evasion attacks perturb correctly classified input samples just enough to cause errors in runtime classification, and 3) inversion attacks repeatedly test trained algorithms with edge-case inputs in order to reveal the previously hidden decision boundaries and training data. Protections against adversarial learning attacks include techniques which cleanse training sets of outliers in order to thwart data poisoning attempts, and methods which sacrifice up-front algorithm performance in order to be robust to evasion attacks. As AI capabilities become incorporated into facets of everyday life, the need to understand adversarial attacks and effects, and relevant mitigation approaches for AI systems, becomes of paramount importance.
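The second attack form above, evasion, can be sketched in a few lines. The following is an illustrative toy example only (not drawn from the workshop materials): a logistic regression classifier is trained on synthetic two-dimensional data, and a correctly classified input is then nudged against the weight vector, in the style of the fast gradient sign method, until it crosses the decision boundary. The data, model, and step size are all assumptions chosen to make the demonstration deterministic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Train logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probabilities
    w -= 0.1 * (X.T @ (p - y) / len(y))
    b -= 0.1 * np.mean(p - y)

def predict(x):
    """Hard label from the trained linear decision boundary."""
    return int(x @ w + b > 0)

# Evasion attack (FGSM-style direction): perturb a correctly classified
# class-1 sample against the sign of the weight vector. For the demo,
# eps is chosen just large enough to push the score across zero.
x = X[99]
margin = x @ w + b                           # positive for class 1
eps = (margin + 0.1) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))            # the perturbed copy is misclassified
```

A defense that "sacrifices up-front performance for robustness," as described above, would typically train on such perturbed copies as well (adversarial training), trading clean accuracy for a wider margin.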
Central to this methodology is the notion of threat modeling, which will support relevant discourse with respect to potential attacks and mitigations.
The workshop format is structured to encourage a lively exchange of ideas among researchers in AI working on developing techniques to mitigate adversarial attacks on end-to-end AI systems.
Workshop Link
Please click the link below to join the webinar:
https://mitll.zoomgov.com/j/1616328450?pwd=ZkdXYnVyYWk3ZnQwSzYyRzd6SDhBQT09
Passcode: 592107
Or iPhone one-tap:
US: +16692545252,,1616328450#,,,,,,0#,,592107#
or
+16468287666,,1616328450#,,,,,,0#,,592107#
Or Telephone:
Dial (for higher quality, dial a number based on your current location):
US: +1 669 254 5252 or +1 646 828 7666
Webinar ID: 161 632 8450
Passcode: 592107
International numbers available: https://mitll.zoomgov.com/u/ab8zhRWcb
Or an H.323/SIP room system:
H.323: 161.199.138.10 (US West) or 161.199.136.10 (US East)
Meeting ID: 161 632 8450
Passcode: 592107
SIP: 1616328450@sip.zoomgov.com
Passcode: 592107
The RAISA3 workshop is held in conjunction with ECAI 2020 in Santiago de Compostela, Spain.
Workshop Theme
Identify, protect against, detect, respond to, and recover from adversarial attacks against AI systems
Keynotes
Khoury College of Computer Sciences, Northeastern University, Boston, MA
School of Electrical Engineering and Computer Science, Pennsylvania State University
Workshop Format
Invited speakers, presentations, panel and group discussions
Workshop Topics
AI threat modeling
Protection against attacks on end-to-end AI architecture:
  Data conditioning stage
  Adversarial machine learning
  Human-machine teaming stage
  Deployment stage
Cyber attacks against AI hardware and/or software
Explainable AI
System lifecycle attacks
System verification and validation
System performance metrics, benchmarks and standards
Protection and detection techniques against black-box, white-box, and gray-box adversarial attacks
Defenses against training attacks
Defenses against testing (inference) attacks
Response and recovery based on:
  Confidence levels
  Consequences of action
AI system confidentiality, integrity, and availability