Our goal is to build reliable machine learning (ML) models that remain resilient in adversarial settings.
There has been growing interest in rectifying machine learning vulnerabilities and preserving privacy: adversarial machine learning and privacy preservation have attracted tremendous attention in the machine learning community over the past few years. Recent research has studied the vulnerabilities of ML algorithms and various defense mechanisms against them. The questions surrounding this space are more pressing and relevant than ever before: How can we make a system robust to novel or potentially adversarial inputs? How can machine learning systems detect and adapt to changes in the environment over time? When can we trust that a system that has performed well in the past will continue to do so in the future? These questions are essential to consider when designing systems for high-stakes applications such as self-driving cars and automated surgical assistants.
We aim to bring together researchers from diverse areas such as reinforcement learning, human-robot interaction, game theory, cognitive science, and security to further the field of reliable and trustworthy machine learning. We will focus on robustness, trustworthiness, privacy preservation, and scalability. Robustness refers to the ability to withstand the effects of adversaries, including adversarial examples, data poisoning, distributional shift, model misspecification, and corrupted data. Trustworthiness is supported by transparency, explainability, and privacy preservation. Scalability refers to the ability to generalize to novel situations and objectives.
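To make the adversarial examples mentioned above concrete, the minimal PyTorch sketch below crafts one with the Fast Gradient Sign Method (FGSM); the toy model, input, and epsilon value are hypothetical placeholders for illustration only, not part of the TF's materials.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    # Craft an adversarial example: perturb x in the direction
    # that increases the classification loss (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range

# Hypothetical usage with a stand-in linear classifier on 28x28 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)  # placeholder normalized image
y = torch.tensor([3])         # placeholder label
x_adv = fgsm_attack(model, x, y)

Defenses studied in this space, such as adversarial training, fold examples like x_adv back into the training loop.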
This Task Force (TF) aims to promote the most recent advances in secure machine learning, from both theoretical and empirical perspectives, as well as its novel applications.
Catherine Huang: Google, USA (catherinehuanglei@gmail.com)
Sherin Mathews: US Bank, USA (s.mathews217@gmail.com)
Owen Vallis: OpenAI, USA (owensvallis@gmail.com)
Leo Zhang: Griffith University, Australia (leo.zhang@griffith.edu.au)
Huiyu (Joe) Zhou: University of Leicester, UK (hz143@leicester.ac.uk)
Wenjian Luo: Harbin Institute of Technology, Shenzhen, China (luowenjian@hit.edu.cn)
Tayo Obafemi-Ajayi: Missouri State University, USA (tayoobafemiajayi@missouristate.edu)
Yaochu Jin: University of Surrey, UK (yaochu.jin@surrey.ac.uk)
Dipankar Dasgupta: University of Memphis, USA (dasgupta@memphis.edu)
Yew Soon Ong: Nanyang Technological University, Singapore (ASYSOng@ntu.edu.sg)
Xinghua Qu: TikTok/ByteDance AI Lab, Singapore (xinghua.qu@bytedance.com)
Xiao Huang: HSBC, UK (Huang.xiao@hsbc.com)
Celeste Fralick: McAfee LLC, USA (celeste_fralick@mcafee.com)
Samuel Mulder: Sandia National Labs, USA (samulde@sandia.gov)
Serena Zhang: Facebook, USA (serenazhang@fb.com)
Michael Enright: Quantum Dimension Inc., USA (menright@qdimension.com)
Simon See: NVIDIA AI Technology Centre (ssee@nvidia.com)
Tsungming (Nick) Tai: NVIDIA AI Technology Centre (ntai@nvidia.com)
Charles Cheung: NVIDIA AI Technology Centre (chcheung@nvidia.com)
Lei Ma: University of Alberta, Canada (ma.lei@acm.org)
[May-25] Special Issue: Launched a special issue on “Robust and Secure AI Systems” in Applied Soft Computing, inviting cutting-edge research on adversarial robustness and security (Wenjian Luo, 2025).
[May-25] Conference Leadership: Organized and participated in special sessions on Ethical AI at IJCNN 2025 and served on a Responsible AI panel at CAI 2025 (Catherine Huang, Tayo Obafemi-Ajayi).
[May-25] Keynote Address: Delivered a keynote on advances in secure machine learning at ICIPAI 2025, Changchun, China (Huiyu Zhou).