Manuscript Submission: April 30, 2021
Notification to Authors: June 30, 2021
Revised Manuscript Due: July 31, 2021
Decision Notification: September 30, 2021
There has been growing interest in rectifying machine learning vulnerabilities and preserving privacy; adversarial machine learning and privacy-preserving learning have attracted tremendous attention over the past few years. The questions surrounding this space are more pressing and relevant than ever before: How can we make a system robust to novel or potentially adversarial inputs? How can machine learning systems detect and adapt to changes in the environment over time? When can we trust that a system that has performed well in the past will continue to do so in the future? These questions are essential to consider when designing systems for high-stakes applications.

We aim to bring together researchers from diverse areas to further the field of reliable and trustworthy machine learning, focusing on robustness, trustworthiness, privacy preservation, and scalability. Robustness refers to the ability to withstand the effects of adversaries, including adversarial examples, data poisoning, distributional shift, model misspecification, and corrupted data. Trustworthiness is guaranteed by transparency, explainability, and privacy preservation. Scalability refers to the ability to generalize to novel situations and objectives. This special issue aims to promote the most recent advances in secure AI from both theoretical and empirical perspectives, as well as novel applications, with the goal of building reliable machine learning and computational intelligence models. Topics of the special issue include, but are not limited to:
Machine learning reliability
Adversarial machine learning (attack and defense)
Privacy-preserving machine learning
Learning over encrypted data
Homomorphic encryption techniques for machine learning
Secure multi-party computation techniques for machine learning
Explainable and transparent artificial intelligence
Neural architecture search for secure learning
Security intelligence in malware, network intrusion, web security, and authentication
All manuscripts are to be submitted through the Manuscript Center for Complex & Intelligent Systems, prepared according to the Springer Journal Submission Guidelines, and will be reviewed following the standard Springer Journal review process.
Dr. Catherine Huang (Managing Guest Editor), McAfee LLC, USA, catherine_huang@mcafee.com
Prof. Yew-Soon Ong, Nanyang Technological University, Singapore, asysong@ntu.edu.sg
Dr. Celeste Fralick, McAfee LLC, USA, celeste_fralick@mcafee.com