Artifact Evaluation

On Artifact Evaluation

It is a common struggle to reproduce experimental results and reuse research code from scientific papers. Voluntary artifact evaluation (AE), introduced successfully at programming languages, systems, and machine learning conferences, promotes reproducibility of experimental results and encourages code and data sharing so that the community can quickly validate and compare approaches. AE enables this by providing authors with structured workflows and a common Artifact Appendix for sharing code and results, and by having an independent committee of evaluators validate the experimental results and assign artifact evaluation badges.

Authors of accepted papers are invited to formally describe supporting materials (code, data, models, workflows, results) using the standard Artifact Appendix template and submit it together with the materials for evaluation. Note that this submission is voluntary and will not influence the final decision regarding the papers. The goal is to have an independent AE Committee validate the experimental results from accepted papers in a collaborative way, while helping readers find articles with available, functional, and validated artifacts!

Papers that successfully go through AE will receive a set of ACM badges of approval, printed on the papers themselves and available as metadata in the ACM Digital Library (it is now possible to search for papers with specific badges in the ACM DL). Authors of such papers will need to include an Artifact Appendix of up to 2 pages describing their artifact in the camera-ready paper.

The three ACM badges are:

Artifacts Available: Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.

Artifacts Evaluated - Functional: The artifacts associated with the research are found to be documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.

Results Reproduced: The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.

Preparing your Artifact Appendix

The authors are expected to prepare and submit an Artifact Appendix with the paper that describes all software, hardware, and data set dependencies, the key results to be reproduced, and how to prepare, run, and validate the experiments. The authors are encouraged to use the following tex template for the Artifact Appendix, similar to AE in previous years. This guide provides an explanation of the different fields in the Artifact Appendix.

We encourage the authors to check out the AE FAQs, the Artifact Reviewing Guide, the SIGPLAN Empirical Evaluation Guidelines, and the NeurIPS reproducibility checklist for creating the best possible artifacts for submission! You can find examples of artifacts from previous conferences at this link.

Preparing your experimental workflow

You can skip this step if you want to share your artifacts without validation of the experimental results - in that case your paper can still be eligible for the "artifact available" badge!

We strongly recommend that you provide at least some scripts to build your workflow, all inputs needed to run it, and some expected outputs so that evaluators can validate the results from your paper. You can then describe the steps to evaluate your artifact using Jupyter Notebooks or plain ReadMe files.
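As an illustration, a validation script might look like the following minimal sketch. All file names here (run_experiment.sh, results.csv, expected_results.csv) and the tolerance are hypothetical placeholders; adapt them to your own workflow:

    #!/usr/bin/env python3
    """Minimal sketch of a result-validation script for an artifact.

    All paths and tolerances below are hypothetical placeholders;
    adapt them to your own workflow.
    """
    import csv
    import subprocess
    import sys

    EXPECTED = "expected_results.csv"   # shipped with the artifact
    ACTUAL = "results.csv"              # produced by the experiment run
    TOLERANCE = 0.05                    # allow 5% deviation, e.g. for timing noise


    def load(path):
        # Read (benchmark, metric) pairs from a two-column CSV file.
        with open(path, newline="") as f:
            return {row[0]: float(row[1]) for row in csv.reader(f)}


    def main():
        # Re-run the experiment; run_experiment.sh is assumed to build
        # the workflow and write its measurements to results.csv.
        subprocess.run(["bash", "run_experiment.sh"], check=True)

        expected, actual = load(EXPECTED), load(ACTUAL)
        failures = [
            name
            for name, ref in expected.items()
            if abs(actual.get(name, float("inf")) - ref) > TOLERANCE * abs(ref)
        ]
        if failures:
            sys.exit(f"Results outside tolerance for: {', '.join(failures)}")
        print("All key results reproduced within tolerance.")


    if __name__ == "__main__":
        main()

A single entry point like this, together with a ReadMe explaining what "within tolerance" means for your paper, makes the evaluators' job considerably easier.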

Making artifacts available to evaluators

Most of the time, the authors make their artifacts available to the evaluators via GitHub, GitLab, BitBucket, or a similar private or public service. Public artifact sharing enables "open evaluation", which we have successfully validated at prior conferences, and it allows the authors to quickly fix issues encountered during evaluation before submitting the final version to archival repositories.

Other methods of making artifacts available to the evaluators are also acceptable.

Note that your artifacts will receive the ACM "artifact available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo, FigShare, or Dryad. You must provide the DOI automatically assigned to your artifact by these repositories in your final Artifact Appendix.
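If you prefer to script this step, the sketch below shows one possible way to create and publish a deposition via the Zenodo REST API and obtain its DOI. It is an illustration only, based on Zenodo's documented deposition workflow; ZENODO_TOKEN, artifact.tar.gz, and the metadata values are placeholders, and uploading through the Zenodo web interface works just as well:

    #!/usr/bin/env python3
    """Sketch: upload an artifact archive to Zenodo and obtain its DOI.

    Illustrative only; ZENODO_TOKEN and artifact.tar.gz are placeholders.
    The Zenodo web interface achieves the same result without any code.
    """
    import os
    import requests

    API = "https://zenodo.org/api/deposit/depositions"
    params = {"access_token": os.environ["ZENODO_TOKEN"]}

    # 1. Create an empty deposition.
    dep = requests.post(API, params=params, json={}).json()

    # 2. Upload the artifact archive into the deposition's file bucket.
    bucket = dep["links"]["bucket"]
    with open("artifact.tar.gz", "rb") as fp:
        requests.put(f"{bucket}/artifact.tar.gz", data=fp, params=params)

    # 3. Add the minimal metadata required before publishing.
    metadata = {"metadata": {
        "title": "Artifact for <paper title>",
        "upload_type": "software",
        "description": "Code and data to reproduce the paper's results.",
        "creators": [{"name": "Doe, Jane"}],
    }}
    requests.put(f"{API}/{dep['id']}", params=params, json=metadata)

    # 4. Publish the deposition; this step permanently assigns the DOI.
    pub = requests.post(f"{API}/{dep['id']}/actions/publish", params=params).json()
    print("DOI:", pub["doi"])

Remember that publishing on Zenodo is permanent, so only run the final step once your artifact is frozen.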

Submitting artifacts

Write a brief abstract describing your artifact, the minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected results are. Do not forget to specify whether you use any proprietary software or hardware! Evaluators will use this abstract during artifact bidding to make sure that they have access to appropriate hardware and software and have the required skills.

Submit the artifact abstract and the PDF of your paper with the Artifact Appendix attached using the AE submission website provided by the event.

Preparing your camera-ready paper

If you have successfully passed AE with at least one of the three AE badges, you will need to add up to 2 pages of your Artifact Appendix to your camera-ready paper, removing all unnecessary or confidential information. This will help readers better understand what was evaluated and how.

If your paper is published in the ACM Digital Library, you do not need to add reproducibility stamps - ACM will add them to your camera-ready paper and will make this information available for search! In other cases, AE chairs will tell you how to add stamps to the first page of your paper.

Acknowledgment

The content on this page has been adapted from pages of previous artifact evaluation efforts at ASPLOS (2020, 2021, 2022, 2023).