Advances in Model Based Testing (A-MOST 2014)

A-MOST 2014 Website

The increasing complexity of software results in new challenges for testing. Model-Based Testing (MBT) continues to be an important research area, where new approaches, methods, and tools make MBT techniques more deployable and useful for industry than ever. Models and their abstractions can ease comprehension of a complex system and simplify test generation and automation. The use of models for designing and testing software is currently one of the most salient industrial trends, with significant impact on development and testing processes. Model-based tools and methods have been successfully applied and continue to converge into comprehensive approaches to software and system engineering. The area encompasses models derived from object-oriented software engineering, formal methods, and other mathematical and engineering disciplines.

A-MOST 2014 will bring together researchers and practitioners interested in model-based testing and will focus on three main areas: the models used in model-based testing; the processes, techniques, and tools that support model-based testing; and the evaluation of model-based testing.
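To illustrate how a model can drive test generation, here is a minimal sketch of transition coverage over a finite-state model. The login-dialog model, its states, and its events are all hypothetical, and the greedy setup strategy is a deliberately simple stand-in for the richer algorithms MBT tools use.

```python
from collections import deque

# Hypothetical finite-state model of a login dialog:
# (state, event) -> next state
MODEL = {
    ("logged_out", "login_ok"): "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in", "logout"): "logged_out",
}

def shortest_path(model, start, goal):
    """Breadth-first search for a transition sequence from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for (src, event), dst in model.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [(src, event, dst)]))
    return []

def transition_cover(model, start):
    """Derive one abstract test per transition (transition coverage)."""
    tests = []
    for (state, event), target in model.items():
        setup = shortest_path(model, start, state)  # drive the SUT to `state`
        tests.append(setup + [(state, event, target)])
    return tests

for test in transition_cover(MODEL, "logged_out"):
    print(" -> ".join(f"{s}[{e}]{t}" for s, e, t in test))
```

Each generated sequence is an abstract test; in practice an adapter would map the events to concrete API or GUI actions.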

The 5th International Workshop on Security Testing (SECTEST 2014)

SECTEST 2014 Website

To improve software security, several techniques, including vulnerability modelling and security testing, have been developed, but the problem remains unsolved. On the one hand, the SECTEST workshop asks how vulnerability modelling can help users understand the occurrence of vulnerabilities so as to avoid them, and what the advantages and drawbacks of the existing models for representing vulnerabilities are. On the other hand, the workshop seeks to understand how to solve the challenging problem of security testing: how security testing differs from and relates to classical functional testing, and how the quality of security testing can be assessed. This is particularly interesting since testing the mere functionality of a system is already a fundamentally difficult task. The objective of the SECTEST workshop is to share ideas, methods, techniques, and tools for vulnerability modelling and security testing in order to improve the state of the art.

The 3rd International Workshop on Combinatorial Testing (IWCT 2014)

IWCT 2014 Website

Combinatorial Testing (CT) is a widely applicable generic method for software verification and validation. In a combinatorial test plan, all interactions between parameters up to a certain level are covered. Studies show that CT can significantly reduce the number of test cases while remaining very effective for fault detection.
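A small sketch can make the idea concrete. The configuration parameters below are hypothetical, and the greedy covering-array construction is a simple illustration of strength-2 (pairwise) coverage, not the more sophisticated algorithms used by production CT tools.

```python
from itertools import combinations, product

# Hypothetical configuration parameters of a system under test.
PARAMS = {
    "os": ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "protocol": ["http", "https"],
}

def pairs_of(test):
    """All (parameter, value) pairs a single test covers, at strength 2."""
    return set(combinations(sorted(test.items()), 2))

def pairwise_suite(params):
    """Greedy 2-way covering array: repeatedly pick the candidate test
    that covers the most still-uncovered pairs."""
    names = sorted(params)
    candidates = [dict(zip(names, vals))
                  for vals in product(*(params[n] for n in names))]
    uncovered = set().union(*(pairs_of(t) for t in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

suite = pairwise_suite(PARAMS)
print(f"{len(suite)} tests cover all parameter pairs; exhaustive testing "
      f"would need {len(list(product(*PARAMS.values())))}")
```

Even on this toy example the pairwise suite is smaller than the exhaustive one; the gap grows dramatically as the number of parameters and values increases.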

This workshop - the third in its series - aims to bring together researchers, developers, users, and practitioners to discuss and exchange ideas and experiences in the development and application of CT methods, techniques, and tools. We invite submissions of high-quality papers presenting original work on both theoretical and experimental aspects of combinatorial testing.

Mutation Analysis (Mutation 2014)

Mutation 2014 Website

Mutation is acknowledged as an important way to assess the fault-finding effectiveness of test sets. Mutation analysis has mostly been applied at the source-code level, but more recently related ideas have also been used to test artifacts described in a considerable variety of notations and at different levels of abstraction. Mutation ideas are used with requirements, formal specifications, architectural design notations, informal descriptions (e.g. use cases), and hardware. Mutation is now established as a major concept in software and systems V&V, and uses of mutation are increasing. The goal of the Mutation workshop is to provide a forum for researchers and practitioners to discuss new and emerging trends in mutation analysis. We invite submissions of both full-length and short research papers as well as industry practice papers.
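The core loop of source-level mutation analysis can be sketched in a few lines. The function under test, the hand-written mutants, and the deliberately weak test suite below are all illustrative; real tools generate mutants automatically by applying mutation operators to the code.

```python
# A sketch of mutation analysis: each "mutant" is the original function
# with one small fault injected; the mutation score is the fraction of
# mutants the test suite kills.

def max_of(a, b):          # original program under test
    return a if a > b else b

# Hand-written mutants simulating common mutation operators.
MUTANTS = {
    "relational >= for >": lambda a, b: a if a >= b else b,
    "relational < for >":  lambda a, b: a if a < b else b,
    "return a always":     lambda a, b: a,
}

# A deliberately weak test suite: it never exercises the a < b case.
TESTS = [((5, 3), 5), ((2, 2), 2)]

def killed(mutant, tests):
    """A mutant is killed if any test observes a different output."""
    return any(mutant(*args) != expected for args, expected in tests)

results = {name: killed(m, TESTS) for name, m in MUTANTS.items()}
score = sum(results.values()) / len(results)
for name, dead in results.items():
    print(f"{name}: {'killed' if dead else 'SURVIVED'}")
print(f"mutation score: {score:.2f}")
```

The surviving "return a always" mutant reveals the missing a < b test; the ">= for >" mutant, by contrast, is an equivalent mutant (when a == b both versions return the same value), illustrating a classic difficulty of mutation analysis.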

The 4th International Workshop on Regression Testing (Regression 2014)

Regression 2014 Website

Regression testing has received a significant amount of attention from both academics and practitioners during the last 20 years. Even though the use of regression testing techniques often leads to software applications with high observed quality, the repeated execution of test cases can be so costly that it often accounts for about half the cost of maintaining a software system. The regression testing research community also faces the additional challenges of transitioning established techniques into practice, improving the status quo of the empirical evaluation of techniques, and proposing advanced methods for applying regression testing to modern software that is often complex, rapidly evolving, concurrent, and cloud-based. Regression 2014 will be the 4th edition of the international workshop in this area.
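One family of techniques for reducing the re-execution cost mentioned above is regression test selection. The sketch below is a deliberately simple illustration under the assumption that a per-test coverage map is available; the test names and file names are hypothetical.

```python
# A minimal sketch of change-based regression test selection: given a
# coverage map recording which source files each test executes, re-run
# only the tests affected by the files changed in a commit.

COVERAGE = {   # hypothetical coverage map: test -> files it executes
    "test_login":    {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile":  {"auth.py", "profile.py"},
}

def select(coverage, changed_files):
    """Select every test whose coverage intersects the changed files."""
    return sorted(t for t, files in coverage.items() if files & changed_files)

print(select(COVERAGE, {"auth.py"}))   # → ['test_login', 'test_profile']
```

Selecting two of three tests here is a modest saving; on large systems, where a change touches a small fraction of the code, selection can skip the vast majority of the suite.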

Testing: Academic and Industrial Conference Practice and Research Techniques (TAIC PART)


Among computer science and software engineering activities, software testing is a perfect candidate for the union of academic and industrial minds. The Testing: Academic and Industrial Conference - Practice and Research Techniques (TAIC PART) is a unique event that strives to combine the important aspects of a software testing conference, workshop, and retreat. TAIC PART brings together industrialists and academics in an environment that promotes meaningful collaboration on the challenges of software testing, and is sponsored by representatives from both industry and academia. The workshop gathers software developers, end users, and academic researchers who work on both the theory and practice of software testing. TAIC PART 2014 is the ninth workshop in a series of highly successful events. Please consider submitting a paper and/or registering to attend the 2014 edition of TAIC PART so that you can be part of a premier software testing workshop. Individuals with questions about TAIC PART are encouraged to contact one of the conference organizers.

Testing the Cloud (TTC 2014)

TTC 2014 Website

Cloud computing is everywhere and seemingly inevitable: originally a layered abstraction over a heterogeneous environment, it has become the paradigm for large-scale, data-oriented systems. And while it offers many attractive features (easy deployment of applications, resiliency, security, performance, scalability, elasticity, etc.), testing its robustness and reliability is a major challenge.

The Cloud is an intricate collection of interconnected and virtualised computers, connected services, and complex service-level agreements. From a testing perspective, the Cloud is thus a complex composition of complex systems, and one can wonder whether anything like global testing is even possible. And if the answer is no, what can we conclude from partial tests? The question of testing this large, network-based, dynamic composition of computers, virtual machines, servers, services, applications, and SLAs seems particularly difficult. It is especially critical for Cloud vendors: customer trust is crucial for companies implementing Clouds, and they have to ensure that the system has all the security and performance characteristics advertised by the marketing department. This problem is a perfect example of shared concerns between academia and product companies, and it covers a broad range of topics, from software development to code analysis, and from performance monitoring to formal models for system testing.

In TTC, we aim to bring together researchers and practitioners interested in this difficult question of testing the Cloud, i.e., a complex, distributed, dynamic, and interconnected system. Hence, we call for regular scientific submissions, but also for industrial experience reports. We are interested in contributions related to ‘testing the Cloud’ (i.e., testing the Cloud itself, for instance its infrastructure), ‘testing in the Cloud’ (i.e., testing applications that are deployed in the Cloud), and ‘testing with the Cloud’ (e.g., using Cloud capabilities to perform stress testing on an application). All submissions describing approaches used in industry, defining new methods to facilitate testing, or identifying new challenges are relevant.