FAQ

Do I have to open source my software artifacts?

No, it is not strictly necessary: you can provide your software artifact as a binary. However, in case of problems, reviewers may not be able to fix them and will likely give you a negative score.

Is artifact evaluation single-blind or double-blind?

AE is a single-blind process, i.e. the authors' names are known to the evaluators (there is no need to hide them, since the papers are already accepted), but the names of the evaluators are not known to the authors. The AE chairs usually serve as a proxy between authors and evaluators in case of questions or problems.

How to pack artifacts?

We do not have strict requirements at this stage. You can pack your artifacts simply in a tarball, zip file, virtual machine image or Docker image. You can also share artifacts via public services such as GitHub, GitLab and Bitbucket. Please see our submission guide for more details.
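
For example, a minimal Python sketch for packing an artifact directory into a compressed tarball might look as follows (the directory name and archive name are placeholders; adapt them to your project):

    # pack_artifact.py - minimal sketch for packing an artifact directory
    # (the paths below are placeholders, not part of our requirements)
    import tarfile

    ARTIFACT_DIR = "my-artifact"          # directory with code, data and README
    ARCHIVE_NAME = "my-artifact.tar.gz"   # resulting tarball to submit or upload

    with tarfile.open(ARCHIVE_NAME, "w:gz") as tar:
        tar.add(ARTIFACT_DIR, arcname=ARTIFACT_DIR)

    print(f"Packed {ARTIFACT_DIR} into {ARCHIVE_NAME}")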

Is it possible to provide remote access to a machine with pre-installed artifacts?

Only in exceptional cases, i.e. when rare hardware or proprietary software/hardware/benchmarks are required, when the VM image is too large, or when you are not authorized to move artifacts outside your organization. In such cases, you will need to send the access information to the AE chairs via private email or SMS. They will then pass this information on to the evaluators.

Can I share commercial benchmarks or software with evaluators?

Please check the license of your benchmarks, data sets and software. In case of any doubt, try to find a free alternative. In fact, we strongly suggest you provide a small subset of free benchmarks and data sets to simplify the evaluation process.

Can I engage with the community to evaluate my artifacts?

Based on community feedback, we allow open evaluation to let the community validate artifacts that are publicly available on GitHub, GitLab, Bitbucket, etc., report issues and help the authors fix them. Note that, in the end, these artifacts still go through the traditional evaluation process via the AE committee. We successfully validated this approach at ADAPT'16 and CGO/PPoPP'17!


How to automate, customize and port experiments?

From our past experience reproducing research papers, the major difficulty that evaluators face is the lack of a common and portable workflow framework in ML and systems research. This means that each year they have to learn the ad-hoc scripts and formats used in nearly every artifact, without being able to reuse that knowledge the following year. Things get even worse if an evaluator would like to validate experiments using a different compiler, tool, library, data set, operating system or hardware, rather than just reproducing quickly outdated results using VM and Docker images - our experience shows that most of the submitted scripts are not easy to change, customize or adapt to other platforms.

That is why we collaborate with the community and ACM to develop a common experimental framework (CK). You can see how CK workflows helped to automate, crowdsource and visualize experiments in the 1st ACM ReQuEST-ASPLOS'18 tournament to co-design a Pareto-efficient software/hardware stack for deep learning: CK workflows, ACM proceedings, report and public dashboards with reproducible results. You can find reproduced papers with portable CK workflows and reusable components on the cKnowledge.io portal. Please follow this guide if you want to convert your artifacts and workflows to the CK format.

Do I have to make my artifacts public if they pass evaluation?

No, you don't have to, and it may be impossible in the case of commercial artifacts. Nevertheless, we encourage you to make your artifacts publicly available upon publication, for example by including them in a permanent repository (required to receive the "artifact available" badge), to support open science as outlined in our vision.

Furthermore, if you make your artifacts publicly available at the time of submission, you may benefit from the "public review" option, where you engage with the community to discuss, evaluate and use your software. See such examples here (search for "public evaluation").

How to report and compare empirical results?

First of all, you should always run empirical experiments more than once (we still encounter many cases where researchers measure execution time only once) and perform statistical analysis. There is no universal recipe for how many times you should repeat an empirical experiment, since it heavily depends on the type of experiment, the platform and the environment. You should then analyze the distribution of execution times, as shown in the figure below:

If you have more than one expected value (b), it means that your system has several run-time states (such as adaptive frequency scaling), and you cannot use the average to reliably compare empirical results. However, if there is only one expected value for a given experiment (a), then you can use it to compare multiple experiments. This is particularly useful when running experiments across different platforms from different users, as described in this article.
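
As an illustration, here is a minimal Python sketch that repeats a measurement, reports the variation and warns when the distribution of execution times looks multi-modal (the benchmark command, the number of repetitions and the gap heuristic are assumptions, not a prescribed method):

    # measure.py - minimal sketch: repeat an experiment and inspect the distribution
    # (the command and thresholds below are placeholders; adapt them to your artifact)
    import statistics
    import subprocess
    import time

    CMD = ["./my_benchmark"]   # hypothetical benchmark binary
    REPETITIONS = 30           # no universal recipe; depends on your experiment

    times = []
    for _ in range(REPETITIONS):
        start = time.perf_counter()
        subprocess.run(CMD, check=True)
        times.append(time.perf_counter() - start)

    median = statistics.median(times)
    spread = (max(times) - min(times)) / median

    print(f"median: {median:.4f}s  min: {min(times):.4f}s  max: {max(times):.4f}s")
    print(f"relative spread: {spread:.1%}")

    # Very rough multi-modality check: a large gap between clusters of sorted
    # times often indicates several run-time states (e.g. frequency scaling).
    sorted_times = sorted(times)
    gaps = [b - a for a, b in zip(sorted_times, sorted_times[1:])]
    if gaps and max(gaps) > 0.2 * median:
        print("warning: execution times look multi-modal; "
              "averaging them may be misleading")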

You should also report the variation of empirical results together with all expected values. Furthermore, we strongly suggest that you pre-record results from your platform and provide a script that automatically compares new results against the pre-recorded ones. Otherwise, evaluators can spend a considerable amount of time digging out and validating results from "stdout". For example, see how new results are visualized and compared against the pre-recorded ones using a dashboard in the CGO'17 artifact.
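
For instance, a minimal Python sketch of such a comparison script might look like this (the file names, result format and 10% tolerance are assumptions, not requirements):

    # compare_results.py - minimal sketch: compare new results with pre-recorded ones
    # (file names and the tolerance are placeholders; adapt them to your artifact)
    import json
    import sys

    TOLERANCE = 0.10  # flag deviations above 10%

    with open("reference_results.json") as f:   # pre-recorded results shipped with the artifact
        reference = json.load(f)
    with open("new_results.json") as f:         # results just produced by the evaluator
        new = json.load(f)

    failed = False
    for name, ref_value in reference.items():
        new_value = new.get(name)
        if new_value is None:
            print(f"{name}: missing in new results")
            failed = True
            continue
        deviation = abs(new_value - ref_value) / abs(ref_value)
        ok = deviation <= TOLERANCE
        failed = failed or not ok
        print(f"{name}: reference={ref_value} new={new_value} "
              f"deviation={deviation:.1%} {'OK' if ok else 'MISMATCH'}")

    sys.exit(1 if failed else 0)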

How to deal with numerical accuracy and instability?

If the accuracy of your results depends on the machine, environment and optimizations (for example, when optimizing BLAS, DNN libraries, etc.), you should provide a script that automatically reports any unexpected loss in accuracy above a provided threshold, as well as any numerical instability.
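
A minimal sketch of such a check, assuming the reference and computed outputs can be loaded as numeric arrays and using a hypothetical threshold, could be:

    # check_accuracy.py - minimal sketch: report unexpected accuracy loss
    # (file names and the threshold are placeholders for your own artifact)
    import sys
    import numpy as np

    THRESHOLD = 1e-5  # maximum tolerated absolute difference

    reference = np.loadtxt("reference_output.txt")  # output recorded on the authors' machine
    computed = np.loadtxt("computed_output.txt")    # output produced on the evaluator's machine

    if not np.all(np.isfinite(computed)):
        sys.exit("numerical instability detected: NaN or Inf in the computed output")

    max_diff = float(np.max(np.abs(computed - reference)))
    print(f"maximum absolute difference: {max_diff:.2e}")

    if max_diff > THRESHOLD:
        sys.exit(f"unexpected accuracy loss: difference exceeds threshold {THRESHOLD:.0e}")
    print("accuracy within the expected threshold")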

How to validate models or algorithm scalability?

If you present a novel parallel algorithm or a predictive model that should scale across a number of cores/processors/nodes, we suggest you provide an experimental workflow that can automatically detect the topology of the user's machine, validate your model or the algorithm's scalability, and report any unexpected behavior.
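
As a rough illustration, a minimal Python sketch that detects the number of available cores and checks how the measured speedup scales might look like this (the benchmark command, the use of OMP_NUM_THREADS and the 50% efficiency threshold are assumptions for the sake of the example):

    # check_scaling.py - minimal sketch: validate scalability on the user's machine
    # (the benchmark command and the efficiency threshold are placeholders)
    import os
    import subprocess
    import time

    MAX_CORES = os.cpu_count() or 1       # detect available cores on this machine
    CMD = ["./my_parallel_benchmark"]     # hypothetical benchmark honouring OMP_NUM_THREADS

    def run(threads):
        env = dict(os.environ, OMP_NUM_THREADS=str(threads))
        start = time.perf_counter()
        subprocess.run(CMD, env=env, check=True)
        return time.perf_counter() - start

    baseline = run(1)
    threads = 2
    while threads <= MAX_CORES:
        elapsed = run(threads)
        speedup = baseline / elapsed
        efficiency = speedup / threads
        print(f"{threads} threads: {elapsed:.3f}s  "
              f"speedup={speedup:.2f}  efficiency={efficiency:.0%}")
        if efficiency < 0.5:
            print(f"warning: unexpected scaling behaviour at {threads} threads")
        threads *= 2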

Is there any page limit for my Artifact Evaluation Appendix?

There is no page limit for the AE Appendix at the time of submission for Artifact Evaluation.

However, there is currently a 2-page limit for the AE Appendix in camera-ready CGO, PPoPP, ASPLOS and MLSys papers. There is no page limit for the AE Appendix in camera-ready SC papers. We also expect that there will be no page limits for AE Appendices in journals willing to participate in the AE initiative.