About the Artifact Evaluation Process
This year, AIIDE will include an Artifact Evaluation, a chance for authors of accepted papers and posters to submit a companion software artifact with their paper. Example artifact types include stand-alone software, web applications, datasets, plug-ins/extensions for existing tools, and social media bots, among others.
The purposes of the artifact evaluation are:
All submitted artifacts will be stringently peer reviewed through a separate review process from the main conference. Artifact evaluation is an optional round of additional reviewing available to authors of accepted papers and posters; the quality of artifacts will not affect acceptance decisions for main conference papers in any way. One companion artifact may be submitted per paper.
FAQ: But don't we already have demos and Playable Experiences? Why introduce another thing?
The Artifact Evaluation is distinct from the Playable Experiences and Demos tracks in that the software artifacts are tightly coupled with a research result documented primarily in a conference paper. Passing the artifact evaluation process is a signal to other researchers in the field that your software is robust, reusable, and that your research claims are reflected in the artifact (thus promoting reproducibility).
To help authors distinguish between these submission types, we summarize their criteria below:
FAQ: Can I submit to artifact evaluation if I haven't submitted to the main conference?
No. Artifact evaluation is only conducted on artifacts associated with accepted papers in the main conference. Authors of accepted papers will be contacted regarding how to submit their artifact for evaluation after paper notifications are sent out.
FAQ: Do all accepted papers need to submit an associated artifact for evaluation?
No. Artifact evaluation is optional, though encouraged, for authors of accepted papers. Artifact evaluation follows a separate peer review process from papers, with a separate (though potentially overlapping) committee of reviewers from the main conference program committee. Artifacts that pass this review process will be publicized on the AIIDE website and announced at the main conference.
FAQ: Where can I learn more about artifact evaluation?
Artifact evaluation is a new process being adopted at conferences across computer science, and it is new for AIIDE in 2018. You can learn more about artifact evaluation here.
Feedback on this process is welcome, and questions can be directed to the artifact evaluation co-chairs or the program chair. More details regarding this process will be made available closer to the artifact evaluation submission deadline.
If you are the author of an accepted paper, please follow the guidelines below to submit your artifact. These guidelines come from the official AEC webpage.
Submissions via Easychair: https://easychair.org/conferences/?conf=aiide18
Create a new submission with the same title as your accepted paper. In the submission form, include two things: the URL of the artifact site and any credentials required to access the files there.
There is no “paper” to submit.
At the artifact site URL, give us access to the artifact, the paper, and all necessary instructions.
See below for additional details.
Irrespective of the nature of the artifacts, authors should create a single Web page (whether on their site or a third-party file upload service) that contains the artifact, the paper, and all necessary instructions.
For artifacts where this would be appropriate, it would be helpful to also provide a self-contained bundle (including instructions) as a single file (.tgz or .zip; please avoid exotic compressors) for convenient offline use: imagine the reviewer who wants to download a single file to expand and work with during a train or bus commute.
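As a minimal sketch of one way to build such a bundle, the following Python snippet packs a local directory into a single .tgz file; the directory name "artifact" and the output filename are hypothetical placeholders, not something required by the evaluation process.

import tarfile

# Pack the artifact directory (code, data, paper, README) into one .tgz
# archive so a reviewer can download and expand a single file offline.
with tarfile.open("aiide18-artifact.tgz", "w:gz") as bundle:
    bundle.add("artifact", arcname="artifact")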
If it would be helpful, please feel free to include a video that demonstrates the artifact running or explains how it should be run.
The artifact submission thus consists of just the URL and any credentials required to access the files.
We ask that, during the evaluation period, you not embed any analytics or other tracking in the Web site for the artifact or, if you cannot control this, that you not access this data. This is important for maintaining the confidentiality of reviewers. If for some reason you cannot comply with this, please notify the chairs immediately.
Authors should strongly consider one of the following methods to package the software components of their artifacts (though the AEC is open to other reasonable formats as well):
In all cases, authors should make a genuine effort to not learn the identity of the reviewers. This may mean turning off “call home” features or analytics, or only using systems with high enough usage that AEC accesses will not stand out. In all cases where tracing is unavoidable, the authors should warn the reviewers in advance so the reviewers can try to take adequate safeguards.
Non-code artifacts should preferably be in open document formats. For documents and reports, we invite authors to use PDF, ODF, or RTF; for experimental data and results, we invite CSV (the preferred option), JSON, or XML. In the special case that authors have to submit non-standard data formats, they should also provide suitable readers.
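As an illustrative sketch only, a "suitable reader" can be as small as a script that converts the non-standard format to CSV. The example below assumes a hypothetical whitespace-delimited "name value" results file; the format and filenames are invented for illustration.

import csv
import sys

def read_results(path):
    # Parse a hypothetical whitespace-delimited "name value" results file,
    # skipping blank lines and '#' comment lines.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, value = line.split(maxsplit=1)
            yield name, value

if __name__ == "__main__":
    # Usage: python read_results.py results.txt > results.csv
    writer = csv.writer(sys.stdout)
    writer.writerow(["name", "value"])
    for row in read_results(sys.argv[1]):
        writer.writerow(row)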