IEEE CIS Neural Networks Technical Committee

 Task Force on Secure Learning 

Aim and Scope

The goal of this Task Force is to build reliable machine learning (ML) models that are resilient in adversarial settings.

There has been growing interest in rectifying machine learning vulnerabilities and preserving privacy. Adversarial machine learning and privacy-preserving machine learning have attracted tremendous attention in the machine learning community over the past few years. Recent research has studied the vulnerabilities of ML algorithms and various defense mechanisms against them. The questions surrounding this space are more pressing and relevant than ever before: How can we make a system robust to novel or potentially adversarial inputs? How can machine learning systems detect and adapt to changes in the environment over time? When can we trust that a system that has performed well in the past will continue to do so in the future? These questions are essential to consider in designing systems for high-stakes applications such as self-driving cars and automated surgical assistants.

We aim to bring together researchers in diverse areas such as reinforcement learning, human-robot interaction, game theory, cognitive science, and security to further the field of reliable and trustworthy machine learning. We will focus on robustness, trustworthiness, privacy preservation, and scalability. Robustness refers to the ability to withstand the effects of adversaries, including adversarial examples and data poisoning, as well as distributional shift, model misspecification, and corrupted data. Trustworthiness is supported by transparency, explainability, and privacy preservation. Scalability refers to the ability to generalize to novel situations and objectives.
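As a purely illustrative aside on the adversarial examples mentioned above, the minimal sketch below shows the Fast Gradient Sign Method (FGSM), one standard way such perturbations are crafted; the model, inputs, and epsilon value are hypothetical placeholders and not part of the Task Force's own work.

# Minimal FGSM sketch (illustrative only; `model`, `images`, `labels`, and
# `epsilon` are assumed placeholders for a trained classifier and its data).
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb inputs x one step in the direction of the loss gradient's sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Signed-gradient step, then clamp back to the valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage:
# adv_images = fgsm_attack(model, images, labels, epsilon=0.03)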

This Task Force aims to promote the most recent advances in secure machine learning, from both theoretical and empirical perspectives, as well as novel applications.

Task Force Chair:

Task Force Vice-Chairs:

Task Force Members:

Planned Activities in 2022/2023:


Completed Activities in 2022:

Completed Activities in 2020:

IEEE Transactions on Artificial Intelligence 

Special Issue on Security and Privacy in Machine Learning