The Journal of Software Testing, Verification & Reliability (STVR) invites authors to submit papers to a Special Issue on Dependable and Secure Machine Learning Systems.
Machine learning (ML) solutions are becoming ubiquitous. Like other software systems, ML systems must meet quality requirements. However, ML systems may be non-deterministic; they may reuse (ideally high-quality) implementations of ML algorithms; and the semantics of the models they produce may be incomprehensible. Consequently, standard notions of software quality and reliability, such as deterministic functional correctness, black-box testing, code coverage, and traditional software debugging, may become irrelevant for ML systems. This calls for novel methods, methodologies, and tools to address the quality and reliability challenges of ML systems.
In addition, the broad deployment of ML software in networked systems inevitably exposes it to attacks. While classical security vulnerabilities remain relevant, ML techniques have additional weaknesses, some already known (e.g., sensitivity to training-data manipulation) and some yet to be discovered. Hence, there is a need for both research and practical solutions to ML security problems.
This special issue focuses on all topics relevant to the testing and verification of Dependable and Secure Machine Learning Systems. In particular, the topics of interest include, but are not limited to:
Extended versions of archival conference papers must contain at least 30% new material and clearly explain the additional contribution. Such papers should also have a different abstract, cite the original conference paper, and explain how the previous work has been extended.
Please submit your paper electronically using the Software Testing, Verification & Reliability manuscript submission site. Select "Special Issue Paper" and enter "Dependable and Secure Machine Learning Systems" as the title.