The AAAI-18 Workshop on

Engineering Dependable and Secure

Machine Learning Systems

February 3, 2018

held at the AAAI Conference on Artificial Intelligence (AAAI-18)

This workshop solicits original contributions addressing problems and solutions related to the dependability, quality assurance, and security of machine learning (ML) systems.

Businesses and society at large increasingly rely on machine learning solutions. Like other software systems, ML systems must meet their requirements. At the same time, different types of systems introduce different dependability and quality requirements; for instance, an autonomous car may require a different level of trust than an investment recommendation system.

However, meeting reliability, quality and security requirements in the context of ML requires new methodologies and tools:

  • Standard notions of software quality and reliability, such as deterministic functional correctness, black-box testing, code coverage, or traditional software debugging, become practically irrelevant for ML systems. This is due to their non-deterministic nature, the re-use of high-quality implementations of ML algorithms, and the lack of understanding of the semantics of learned models; it applies in particular to systems based on neural networks trained with deep learning methods. For example, a self-driving car's models may have been trained in a cold-weather country. When the car is deployed in a hot-weather country, it will face dramatically different driving conditions that may render its models obsolete. This calls for novel methods to address the quality and reliability challenges of ML systems.
  • Broad deployment of ML software in networked systems inevitably exposes it to attacks. While classical security vulnerabilities remain relevant, ML techniques have additional weaknesses, some already known (e.g., sensitivity to training-data manipulation) and some yet to be discovered. Hence, there is a need for both research and practical solutions to ML security problems.
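The cold-to-hot-weather scenario above is an instance of distribution shift, which can often be flagged before accuracy degrades by comparing deployment inputs against the training distribution. A minimal sketch, assuming SciPy and a single monitored feature; the temperature values and the 1% significance threshold are illustrative assumptions, not part of the workshop call:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values observed during training ("cold climate")
# versus after deployment ("hot climate") -- illustrative numbers
train_temps = rng.normal(loc=-5.0, scale=4.0, size=500)
deploy_temps = rng.normal(loc=30.0, scale=4.0, size=500)

# Two-sample Kolmogorov-Smirnov test: do both samples plausibly
# come from the same distribution?
stat, p_value = ks_2samp(train_temps, deploy_temps)

# Reject "same distribution" at the 1% level -> retraining may be due
shift_detected = p_value < 0.01
```

Such a monitor says nothing about model correctness by itself, but it provides a cheap runtime signal that the deployed model may be operating outside the conditions it was trained for.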
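The sensitivity to training-data manipulation mentioned above can be demonstrated in a few lines: an adversary who flips the labels of a targeted fraction of the training set noticeably degrades a simple classifier. A minimal sketch, assuming scikit-learn; the synthetic 1-D task and the 25% targeted-flip attack are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic 1-D task: the true label is simply sign(x)
X = rng.normal(size=(400, 1))
y = (X[:, 0] > 0).astype(int)
X_test = rng.normal(size=(400, 1))
y_test = (X_test[:, 0] > 0).astype(int)

clean_model = LogisticRegression().fit(X, y)

# Poisoning attack: flip the labels of the 100 training points
# (25%) with the largest feature values from 1 to 0
y_poisoned = y.copy()
y_poisoned[np.argsort(X[:, 0])[-100:]] = 0

poisoned_model = LogisticRegression().fit(X, y_poisoned)

acc_clean = clean_model.score(X_test, y_test)
acc_poisoned = poisoned_model.score(X_test, y_test)
```

The targeted flip is deliberately placed on high-leverage points; the same number of randomly flipped labels would hurt far less, which is what makes worst-case analyses of poisoning attacks necessary.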

To provide solutions for the above problems, the workshop aims to combine several disciplines, including ML, software engineering (with an emphasis on quality), security, and algorithmic game theory. It also aims to promote a dialogue between academia and industry in a quest for well-founded, practical solutions.