The AAAI-22 Workshop on
Engineering Dependable and Secure
Machine Learning Systems
March 1, 2022
Held at the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22)
Machine learning solutions are now widely deployed and, like other software systems, must meet quality requirements. However, ML systems may be non-deterministic; they may re-use high-quality implementations of ML algorithms; and the semantics of the models they produce may be incomprehensible. Consequently, standard notions of software quality and reliability, such as deterministic functional correctness, black-box testing, code coverage, and traditional software debugging, become practically irrelevant for ML systems. This calls for novel methods, methodologies, and tools to address the quality and reliability challenges of ML systems.
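To illustrate why deterministic functional correctness is a poor fit for ML systems, consider a minimal sketch (a hypothetical toy example; the perceptron, data set, and seeds are all assumptions, not drawn from any particular submission): two training runs that differ only in their random shuffling order can yield different learned parameters, even though both fit the data.

```python
import random

def train_perceptron(data, seed, epochs=20, lr=0.1):
    """Train a tiny perceptron; the result depends on the shuffle seed."""
    rng = random.Random(seed)
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        rng.shuffle(data)  # stochastic visiting order: a source of non-determinism
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(model, data):
    w, b = model
    return sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
               for (x1, x2), y in data) / len(data)

# Linearly separable toy data (hypothetical).
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1),
        ((1.5, 1.0), 1), ((0.2, 0.8), 0), ((1.2, 0.3), 1)]

m1 = train_perceptron(list(data), seed=1)
m2 = train_perceptron(list(data), seed=2)
# Both models fit the training data, yet their weights may differ,
# so asserting exact equality of learned parameters is the wrong test;
# quality must instead be judged by behavioral properties such as accuracy.
```

This is why testing for ML tends to target statistical or metamorphic properties of model behavior rather than bit-exact outputs.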
In addition, broad deployment of ML software in networked systems inevitably exposes the ML software to attacks. While classical security vulnerabilities are relevant, ML techniques have additional weaknesses, some already known (e.g., sensitivity to training data manipulation), and some yet to be discovered. Hence, there is a need for research as well as practical solutions to ML security problems.
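The sensitivity to training data manipulation mentioned above can be sketched with a toy label-flipping attack (a minimal illustration under assumed data; the nearest-centroid classifier and the 1-D feature values are hypothetical): relabeling a single well-chosen training point shifts the learned decision boundary.

```python
import statistics

def centroid_classifier(train):
    """Fit a nearest-centroid classifier: predict the class whose mean is closer."""
    c0 = statistics.mean(x for x, y in train if y == 0)
    c1 = statistics.mean(x for x, y in train if y == 1)
    return lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean 1-D training data: class 0 clustered near 0, class 1 near 10 (hypothetical).
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1), (10.0, 1)]
f_clean = centroid_classifier(clean)

# Label-flipping attack: the adversary relabels the class-1 point
# closest to the boundary as class 0.
poisoned = [(x, 0 if (x, y) == (8.0, 1) else y) for x, y in clean]
f_poisoned = centroid_classifier(poisoned)

# The class-0 centroid is dragged toward class 1, moving the decision
# boundary and flipping predictions for inputs near it (e.g., x = 6.0).
```

With the clean model the class-0 centroid is 1.0 and the boundary sits at 5.0, so 6.0 is classified as class 1; after the single flipped label the centroid moves to 2.75, the boundary past 6.0, and the same input is misclassified.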
With these challenges in mind, this workshop solicits original contributions addressing problems and solutions related to the dependability, quality assurance, and security of ML systems. The workshop combines several disciplines, including ML, software engineering (with an emphasis on quality), security, and game theory. It further brings together academia and industry in a quest for well-founded practical solutions. Topics of interest include, but are not limited to:
Vulnerability, sensitivity and attacks against ML
Adversarial ML and adversary-based learning models
Strategy-proof ML algorithms
Case studies of successful and unsuccessful applications of ML techniques
Correctness of data abstraction, data trust
Choice of ML techniques to meet security and quality requirements
Size of the training data and the guarantees it implies
Application of classical statistics to ML systems quality
Sensitivity to data distribution diversity and distribution drift
The effect of labeling costs on solution quality (semi-supervised learning)
Reliable transfer learning
Software engineering aspects of ML systems and quality implications
Testing of the quality of ML systems over time
Debugging of ML systems
Quality implications of ML algorithms on large-scale software systems
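As a concrete illustration of the distribution-drift topic above, a minimal monitoring check can compare training-time and production feature distributions with a two-sample statistic. The sketch below (a hypothetical example; the feature values, sample sizes, and any alarm threshold are assumptions) computes the Kolmogorov-Smirnov statistic by hand.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    values = sorted(set(a) | set(b))
    cdf = lambda s, v: sum(x <= v for x in s) / len(s)
    return max(abs(cdf(a, v) - cdf(b, v)) for v in values)

rng = random.Random(0)
reference = [rng.gauss(0.0, 1.0) for _ in range(500)]  # training-time feature values
same      = [rng.gauss(0.0, 1.0) for _ in range(500)]  # production, no drift
drifted   = [rng.gauss(1.5, 1.0) for _ in range(500)]  # production, mean has shifted

# A small statistic is consistent with the same distribution;
# a large one flags drift and could trigger retraining or an alert.
low = ks_statistic(reference, same)
high = ks_statistic(reference, drifted)
```

In practice such a check would run per feature on a rolling window of production inputs, with the alarm threshold calibrated against the statistic's sampling distribution.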