SSEAT 2011

Workshop on State-space Exploration for Automated Testing

July 21, 2011 (afternoon only)

Co-located with ISSTA 2011

Toronto, ON, Canada

Theme

Testing is the most widely used approach for validating software, but it is labor-intensive and error-prone. Automated testing has the potential to make testing more cost-effective. A number of recent research approaches to automated testing use state-space exploration techniques, including explicit-state model checking, symbolic execution, search-based techniques, heuristic-guided exploration, and combinations of these. Such approaches can be applied in various scenarios, such as model checking, model-based testing, and code-based test-case generation, and they are implemented in several tools used in both industry and academia. To improve performance, these tools incorporate different methods for state representation, state comparison, function summaries, and so on. Some tools work only with the code under test, while others can exploit additional information such as state abstractions, state comparisons, existing tests, and oracles.

While state-space exploration for automated testing has already shown promising results, a number of challenges remain, including how to improve the performance of tools, how to scale to larger code bases, how to achieve wider adoption in industry, how to handle more advanced language features, and how to reduce false alarms. Another important issue is how to compare various tools and techniques, since they are typically implemented on different platforms and evaluated on code chosen in an ad hoc manner.

Goals

One goal of this workshop is to bring together researchers from both industry and academia to identify a set of programs that can be used for comparing various tools and techniques that perform state-space exploration for automated testing.

The other goal is to organize a comparison of SSEAT tools on the collected programs.

The eventual goal is to build a benchmark suite for comparing SSEAT tools, as discussed at SSEAT 2010, SSEAT 2009, and SSEAT 2008, and hopefully to organize a competition of SSEAT tools similar to competitions organized in several other areas.

Topics

The topics of this workshop include, but are not limited to, techniques and tools that automate testing using:

    • Model checking
    • Symbolic execution
    • Constraint solving
    • Random exploration
    • Heuristics-based searches
    • Genetic algorithms
    • Combinations of techniques

Format

This will be a half-day workshop aimed at identifying a set of programs for comparing various techniques and tools. There will be a small number of short presentations about programs and explicitly allocated time for discussion sessions.

Competition

At the workshop, we will conduct a small "experimental" competition between automated testing tools. Please see our Proposal for an Experimental Competition for details.

Submissions

In addition to the competition, the organizers invite independent proposals for presentations at the workshop as well as submissions of potential programs and tools for the comparison. We are interested in a variety of example programs. They should contain statements or states that are hard to reach; such statements may be marked, and interesting states should likewise be characterized in some way.
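
To make this concrete, below is a minimal, hypothetical sketch (in Java; the class and method names are ours and not part of any proposed benchmark) of the kind of program we have in mind: the marked statement is guarded by constraints that random inputs are unlikely to satisfy, while symbolic execution or a guided search can derive a satisfying input (e.g., x = 5, y = 25).

    // Hypothetical example of a benchmark candidate with a marked,
    // hard-to-reach statement. Reaching target() requires inputs that
    // satisfy y == x*x and x + y == 30 (e.g., x = 5, y = 25), which
    // random exploration is unlikely to find by chance.
    public class HardToReach {

        // Marker for the target statement; a real benchmark might use an
        // assertion or a tool-specific annotation instead.
        static void target() {
            throw new AssertionError("hard-to-reach statement executed");
        }

        public static void check(int x, int y) {
            if (y == x * x) {          // first constraint on the inputs
                if (x + y == 30) {     // second constraint; jointly solvable
                    target();          // the marked hard-to-reach statement
                }
            }
        }

        public static void main(String[] args) {
            // With concrete values satisfying both constraints, the
            // AssertionError confirms the marked statement is reachable.
            check(5, 25);
        }
    }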

Please email your proposals to sseatorg@googlegroups.com. A proposal should identify who you are, briefly describe your work on or interest in the topics of the workshop, and describe at least one program that you propose to be discussed for inclusion in the benchmark suite, or one tool to be used in the tool comparison. Presentations at the workshop will be by invitation only, decided based on the proposals. We expect up to 15 participants.

You are also welcome to join the mailing list and discuss potential benchmark programs and tools there.

Dates

Submissions: open (email organizers if you have questions)

Workshop: July 21, 2011 (afternoon only)

Organizers