Call for Contributions

Contributions must focus on one of the following benchmarks: MBI, DataRaceBench, or MPI-CorrBench.

Contributions may fall into the following categories:

New benchmark codes: new codes should be notable, either because they exercise a use case that is not yet covered, or because they are representative of larger applications (ProxyApps for correctness). They can be correct or intentionally incorrect.

Tool case studies: we expect a case study evaluating a selection of at least two existing tools.

New metrics: the metric may be entirely new, or an existing metric not yet evaluated that is more meaningful than those commonly used. The metric should be tested on the existing benchmarks (DataRaceBench, MBI, or MPI-CorrBench), either through a script that leverages the existing benchmarks' harnesses or through an external evaluation.

Tool submissions: the submitter should be the author of the provided tool or have explicit permission to participate in the event. A new version of a previously submitted tool may be re-submitted; the paper must then summarize the changes between the two versions.

The authors must offer the reviewing committee a way to validate the results. The tool should be freely available (either open source or through a ready-to-use Docker image or similar).

Bug stories: the paper should explain how an error was chased down in practice, which approach and tools (if any) were used, and the lessons learned.

All contributions must consist of a short paper (2 to 4 pages) giving a technical description of the contribution, together with a reproducible artifact. The artifact will be evaluated and considered in the paper's acceptance decision. Submitted papers and artifacts will be peer-reviewed by the Reviewing Committee, and accepted papers will be published in IEEE Xplore.

Note: Authors contributing to several of the contribution categories may submit a single short paper describing all their contributions.