The AAAI-19 Workshop on

Engineering Dependable and Secure

Machine Learning Systems


January 28, 2019


held at the AAAI Conference on Artificial Intelligence (AAAI-19)

Modern society increasingly relies on machine learning (ML) solutions. Like other systems, ML systems must meet their requirements. Standard notions of software quality and reliability, such as deterministic functional correctness, black-box testing, code coverage, or traditional software debugging, become practically irrelevant for ML systems. This is due to the nondeterministic nature of ML systems, the reuse of high-quality implementations of ML algorithms, and the lack of understanding of the semantics of learned models, for example when deep learning methods are applied.

For example, a self-driving car's models may have been trained in a cold-weather country. When such a car is deployed in a hot-weather country, it will likely face dramatically different driving conditions that may render its models obsolete. This calls for novel methods, methodologies, and tools to address the quality and reliability challenges of ML systems.

Furthermore, broad deployment of ML software in networked systems inevitably exposes the ML software to attacks. While classical security vulnerabilities remain relevant, ML techniques have additional weaknesses, some already known (for example, sensitivity to training-data manipulation) and some yet to be discovered. Hence, there is a need for research as well as practical solutions to ML security problems.
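One already-known weakness of the kind mentioned above is sensitivity to small, deliberately crafted input perturbations. The sketch below demonstrates this on a toy logistic-regression model using an FGSM-style gradient-sign attack; the data, learning rate, and the deliberately exaggerated perturbation budget are all invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a tiny logistic-regression classifier on two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

w, b = np.zeros(2), 0.0
for _ in range(500):  # plain gradient descent on the log loss
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

# FGSM-style attack: step each input feature in the direction that
# increases the loss for the true label (here y = 1).
x = np.array([2.0, 2.0])                 # confidently class-1 input
grad_x = (sigmoid(w @ x + b) - 1.0) * w  # gradient of the log loss w.r.t. x
x_adv = x + 3.0 * np.sign(grad_x)        # large budget, chosen for clarity

print(sigmoid(w @ x + b))      # near 1: classified correctly
print(sigmoid(w @ x_adv + b))  # below 0.5: prediction flipped
```

The same gradient-sign idea scales to deep models, where far smaller perturbations, often imperceptible to humans, suffice to flip predictions.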

With these challenges in mind, this workshop solicits original contributions addressing problems and solutions related to the dependability, quality assurance, and security of ML systems. We are also delighted to include here the call for papers for an STVR special issue that accompanies this workshop.

The workshop combines several disciplines, including ML, software engineering (with emphasis on quality), security, and algorithmic game theory. It further combines academia and industry in a quest for well-founded practical solutions.

Topics

Topics of interest include, but are not limited to, the following:

  • Software engineering aspects of ML systems and quality implications
  • Testing and debugging of ML systems
  • Quality implication of ML algorithms on large-scale software systems
  • Case studies of successful and unsuccessful applications of ML techniques
  • Correctness of data abstraction, data trust
  • ML techniques to meet security and quality requirements
  • Size of the training data and implied guarantees
  • Application of classical statistics to ML systems quality
  • Sensitivity to data distribution diversity and distribution drift
  • The effect of labeling costs on solution quality (semi-supervised learning)
  • Reliable transfer learning
  • Vulnerability, sensitivity and attacks against ML
  • Adversarial ML and adversary-based learning models
  • Strategy-proof ML algorithms