This workshop focuses on how to present papers from a coding perspective so that reproducibility and replication of results in the Machine Learning community become easier.
Papers from the Machine Learning community are a valuable asset. They can inform and inspire future research. They can serve as a useful educational tool for students. They are the driving force of innovation and differentiation in industry, where quick and accurate implementation is critical. On the research side, they can help us answer the most fundamental questions about our existence: what does it mean to learn, and what does it mean to be human? Reproducibility, while not always possible in science (consider the study of a transient astronomical phenomenon like a passing comet), is a powerful criterion for improving the quality of research. A result that is reproducible is more likely to be robust and meaningful, and reproducibility rules out many types of experimenter error, whether fraudulent or accidental. There are many interesting open questions about how reproducibility issues intersect with the Machine Learning community:
Our aim in this workshop is to raise the profile of these questions in the community and to search for their answers. To that end, we invite papers focusing on the following topics:
We will accept both short paper (4 pages) and long paper (8 pages) submissions (not including references). Submissions should be in the NIPS 2018 format. A few papers may be selected as oral presentations; the other accepted papers will be presented in a poster session. There will be no proceedings for this workshop; however, at the authors' request, accepted contributions will be made available on the workshop website. Submissions are single-blind, peer-reviewed on OpenReview (https://openreview.net/group?id=ICML.cc/2018/RML), and open to previously published work.
Workshop Paper Submission Deadline: June 10th (extended from June 5th)
Workshop Paper Decision: June 20th
Camera Ready Deadline: July 1st
July 14th, Stockholm
8:30-9:00 Animashree Anandkumar & Opening Remarks
9:00-9:30 John Ioannidis
9:30-10:00 Alexandre Gramfort, Reproducible ML: Software challenges, anecdotes and some engineering solutions
10:00-10:30 Coffee break/posters
10:30-11:00 Percy Liang
11:00-11:30 Olivia Guest, Varieties of Reproducibility in Empirical and Computational Domains
11:30-12:00 Alistair Johnson
12:00-12:30 Nicolas Rougier
12:30-14:00 Lunch Break
14:00-14:40 MLTRAIN Tutorial (Nikolaos Vasiloglou)
14:40-15:10 Armand Joulin
15:10-15:20 Contributed Talk 1: Realistic Evaluation of Deep Semi-Supervised Learning Algorithms
15:20-15:30 Contributed Talk 2: Depth First Learning: Learning to Understand Machine Learning
15:30-16:00 Coffee break/posters
16:00-16:30 Steve Hsu, Machine Learning, Genomics, and Reproducibility
16:30-17:00 Aki Vehtari, Reproducibility and Stan
17:00-17:30 Joelle Pineau
17:30-18:30 Panel (Moderator: Joelle Pineau)
Panelists: Olivia Guest, Alistair Johnson, Steve Hsu, Animashree Anandkumar, Armand Joulin
** This list is subject to change depending on the speakers' and panelists' final availability.