AI meets Software Quality:

towards safer intelligent systems

&

intelligent ways to make systems safer


An international colloquium


Scope

AI solutions are now widely deployed. Like other systems, AI systems must meet quality requirements. However, AI systems may be non-deterministic; they may reuse powerful implementations of AI algorithms; and the semantics of the solutions they produce may be incomprehensible. Consequently, standard notions of software quality and reliability, such as deterministic functional correctness, code coverage, and traditional software debugging, may become practically irrelevant for AI systems. This calls for novel methods and tools to address the quality and reliability challenges of AI systems. In addition, the broad deployment of AI software in networked systems inevitably exposes it to attacks. While classical security vulnerabilities remain relevant, AI techniques have additional weaknesses, some already known (e.g., sensitivity to training-data manipulation) and some yet to be discovered. Hence, there is a need for research into, and practical solutions for, AI security problems.

Beyond the reliability of AI systems, recent research in AI offers many tools that can support better testing and analysis of software systems. For example, the challenge of deciding what and when to test, which software engineers often face, can benefit greatly from the ability of current AI technologies to generalize from examples. Such capabilities are not limited to testing. Many methods for assuring software quality can be improved with AI, reducing the effort required for quality assurance. AI can be applied to increase the safety of programming languages and libraries, to improve analysis techniques, to support fault localization and weakness identification, and more. AI can also be used to produce better reports that, in turn, facilitate more accurate assessments of software readiness. Thus, it is necessary to study both practical solutions and tools and the theories that enable and support them.

This colloquium focuses on research problems and solutions related to the dependability, quality assurance, and security of software systems and AI-based systems. The colloquium spans several disciplines, including AI and ML, software engineering (with an emphasis on quality), security, and game theory. It further encourages both academic and industrial studies in a quest for well-founded practical solutions.

The colloquium aims to bring together researchers from academia and industry who are interested in the quality of AI systems and in the application of AI to the testing of traditional software. It is designed to facilitate discussion of early-stage research and the sharing of pain points and challenges encountered in practice.