Author Instructions

Artifact Preparation

You need to prepare an Artifact Appendix describing all software, hardware, and data set dependencies, the key results to be reproduced, and how to prepare, run, and validate experiments. Though this is relatively intuitive and based on our past AE experience and your feedback, we strongly encourage you to check the Artifact Appendix guide, the artifact reviewing guide, the SIGPLAN Empirical Evaluation Guidelines, the NeurIPS reproducibility checklist, and the AE FAQs before submitting artifacts for evaluation! You can find examples of Artifact Appendices in the following reproduced papers.

Since the AE methodology differs slightly across conferences, we introduced a unified Artifact Appendix with a Reproducibility Checklist to help readers understand what was evaluated and how! Furthermore, artifact evaluation sometimes helps to discover minor mistakes in the accepted paper - in such a case you have a chance to add related notes and corrections in the Artifact Appendix of your camera-ready paper!

We strongly recommend providing at least some scripts to build your workflow, all inputs to run your workflow, and some expected outputs to validate the results from your paper. You can then describe the steps to evaluate your artifact using Jupyter Notebooks or plain README files. You can skip this step if you want to share your artifacts without validation of the experimental results - in such a case your paper can still be eligible for the "artifact available" badge!
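As an illustration, the build/run/validate structure described above can be sketched as a small shell script shipped with the artifact. All file and directory names here (artifact/, results/output.txt, expected_output.txt) are hypothetical placeholders; the "run" step stands in for your real experiment command, and validation is a simple comparison against an expected output shipped with the artifact:

```shell
#!/bin/sh
# Minimal artifact evaluation sketch: run an experiment and validate
# its output against an expected result shipped with the artifact.
# All paths and the fake "experiment" below are placeholders.
set -e

mkdir -p artifact/results

# "Run" step: stand-in for the real experiment command from the paper.
echo "throughput: 42 ops/s" > artifact/results/output.txt

# "Expected output" bundled with the artifact for validation.
echo "throughput: 42 ops/s" > artifact/expected_output.txt

# "Validate" step: compare actual results against the expected ones.
if diff -q artifact/results/output.txt artifact/expected_output.txt; then
  echo "VALIDATION PASSED"
else
  echo "VALIDATION FAILED" >&2
  exit 1
fi
```

Evaluators can then reproduce a key result by running a single command, and the script's exit status makes the outcome unambiguous even when outputs are long.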

Artifact Submission

Submit the artifact abstract and the PDF of your paper with the Artifact Appendix attached using the AE submission website.

The (brief) abstract should describe your artifact, the minimal hardware and software requirements, how it supports your paper, how it can be validated, and what the expected result is. Do not forget to specify if you use any proprietary software or hardware! This abstract will be used by evaluators during artifact bidding to make sure that they have access to appropriate hardware and software and have the required skills.

Most of the time, authors make their artifacts available to the evaluators via GitHub, GitLab, BitBucket, or a similar private or public service. Public artifact sharing also allows an optional "open evaluation", which we successfully validated at ADAPT'16 and ASPLOS-REQUEST'18: it lets authors quickly fix issues encountered during evaluation before submitting the final version to archival repositories. Other acceptable methods include:

Note that your artifacts will receive the ACM "artifact available" badge only if they have been placed in a publicly accessible archival repository such as Zenodo, FigShare, or Dryad. You must provide the DOI automatically assigned to your artifact by one of these repositories in your final Artifact Appendix!