Reproducibility in Machine Learning

Reproducibility in ML Workshop, ICML'17

Description

This workshop focuses on issues of reproducibility and replication of results in the Machine Learning community.

Papers from the Machine Learning community are supposed to be a valuable asset. They can help to inform and inspire future research. They can be a useful educational tool for students. They can give guidance to applied researchers in industry. Perhaps most importantly, they can help us to answer the most fundamental questions about our existence - what does it mean to learn and what does it mean to be human? Reproducibility, while not always possible in science (consider the study of a transient astronomical phenomenon like a passing comet), is a powerful criterion for improving the quality of research. A result which is reproducible is more likely to be robust and meaningful, and reproducibility rules out many types of experimenter error (whether fraud or accident).

There are many interesting open questions about how reproducibility issues intersect with the Machine Learning community:

• How can we tell if papers in the Machine Learning community are reproducible even in theory? If a paper is about recommending news sites before a particular election, and the results come from running the system online in production, it will be impossible to reproduce the published results because the state of the world has changed irreversibly since the experiment was run.

• What does it mean for a paper to be reproducible in theory but not in practice? For example, if reproducing a paper requires tens of thousands of GPUs or a large closed-off dataset, then in reality it can only be reproduced by a few large labs.

• For papers which are reproducible both in theory and in practice - how can we ensure that results published at ICML would actually replicate if such an experiment were attempted?

• What does it mean for a paper to have successful or unsuccessful replications?

• Of the replication attempts that have been completed, how many have been published?

• What can be done to ensure that as many papers as possible which are reproducible in theory end up in this last category, with completed and published replications?

• On the issue of reproducibility, what can the Machine Learning community learn from other fields?

Invited Speakers

  • John Langford, Microsoft Research
  • Hugo Larochelle, Google Brain
  • Oriol Vinyals, Google DeepMind
  • Jason Weston, Facebook AI Research
  • Damjan Vukcevic, University of Melbourne
  • Joaquin Vanschoren, Eindhoven University of Technology
  • Robert Williamson, Australian National University


Call for Papers

Our aim in this workshop is to raise the profile of these questions in the community and to search for answers. To that end, we invite papers on the following topics:

• Analysis of the current state of reproducibility in machine learning venues

• Tools to help increase reproducibility (a brief sketch of one such tool follows this list)

• Evidence that reproducibility is important for science

• Connections between the reproducibility situation in Machine Learning and other fields

• Replications, both failed and successful, of influential papers in the Machine Learning literature
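To make the "tools" topic above concrete, here is a minimal sketch of the kind of utility we have in mind: it seeds the relevant random number generators and records an environment fingerprint next to the result, so a replication attempt knows exactly what it is trying to match. This is an illustration only, assuming Python with numpy; the function names are ours and do not refer to any existing library.

```python
import hashlib
import json
import platform
import random
import sys

import numpy as np


def set_global_seed(seed: int) -> None:
    """Seed the stdlib and numpy RNGs so a run can be repeated exactly."""
    random.seed(seed)
    np.random.seed(seed)


def environment_fingerprint() -> dict:
    """Record the facts a replicator needs to recreate this run."""
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "numpy": np.__version__,
    }


def run_experiment(seed: int = 0) -> float:
    """Stand-in 'experiment': the mean of a fixed-size random sample."""
    set_global_seed(seed)
    data = np.random.normal(loc=0.0, scale=1.0, size=1000)
    return float(data.mean())


if __name__ == "__main__":
    record = {
        "seed": 0,
        "result": run_experiment(seed=0),
        "env": environment_fingerprint(),
    }
    # Publishing this record with the paper tells a replicator exactly
    # what to match and on which software stack the number was produced.
    print(json.dumps(record, indent=2))
    # A digest of the record makes drift across reruns easy to spot.
    print(hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest())
```

Even this much metadata distinguishes a bit-for-bit rerun from a statistical replication on different hardware or library versions.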

Important Dates

We will be accepting extended abstracts of 2-6 pages in length, not including references. Submissions should be in the NIPS 2017 format.

The refereeing will be single blind and performed on OpenReview (https://openreview.net/group?id=ICML.cc/2017/RML). Accepted papers will be presented at a poster session during the workshop. A few papers may be accepted for oral presentation.

Submission Deadline: June 22nd

Workshop Decision: July 1st

Camera Ready Deadline: August 1st

Submission Instructions will be posted soon.

Workshop Schedule

August 11th, 2017 (part of ICML in Sydney, Australia).

8:30-8:45 Opening remarks

8:45-9:15 Hugo Larochelle, Some Opinions on Reproducibility in ML

9:15-10:00 Robert Williamson, Beyond Reproducibility

10:00-10:30 Coffee / Posters

10:30-11:00 John Langford, Reproducibility in Machine Learning

11:00-11:30 Nicolas Papernot, Adversarial Machine Learning with CleverHans

11:30-12:00 Contributed talk by Xinkun Nie, Why adaptively collected data have negative bias and how to correct for it

12:00-14:00 Lunch

14:00-14:30 Jason Weston, ParlAI: A Dialog Research Software Platform

14:30-15:00 Joaquin Vanschoren, OpenML: Making machine learning research more reproducible (and easier) by bringing it online.

15:00-15:30 Coffee

15:30-16:00 Damjan Vukcevic, Our Obsession with Dichotomization

16:00-17:00 Panel discussion, hosted by Samy Bengio

Panelists: Hugo Larochelle, Jason Weston, Robert Williamson, John Langford

Summary of the Panel

These notes may not be exhaustive.

  • Have an extra reviewer on each paper who is only responsible for scoring the reproducibility of a paper. (John Langford)
  • When area chairs select certain papers for oral presentation, have a quota requiring that at least some of the selected papers have published code. This is easier at a single-blind conference, but could also be done at a double-blind one.
    • Requiring all papers to have published code might drive industrial researchers away from those conferences, since much of their code cannot be made public.
  • Joelle Pineau's ML class will have a project where students try to reproduce ICLR submissions.
  • Should there be a formal mechanism for tying papers to attempted replications? For example, on arXiv.
  • What is the incentive for students to spend time reproducing others' work?
    • Right now, many junior researchers do reimplement major papers to try to get noticed (e.g. to get hired by a big lab).
    • In general, PhD students will want to focus on novel research rather than spend their PhD reproducing others' work.
  • Could we change our culture around citations so that the standard practice is to cite ideas (from the paper) and code separately? Researchers who publish code with their work would then receive two citations instead of one. If a paper is published without code and the code is later reproduced publicly by someone else, then (potentially) half of the citation credit would go to the person who publicly reproduced the work.
  • Early neural network research had a reputation for being a "dark art", where only some practitioners could get it working. Some of these methods may have been technically reproducible, but they were extremely sensitive to tuning and hard to generalize (see the sketch after this list). Where does this fit into our conversation about reproducibility?
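On the last point, one concrete habit addresses it directly: report results over several random seeds instead of one. The sketch below is purely illustrative, assuming Python with numpy; the toy loss and function names are ours, not any published method. It shows how a seed-sensitive procedure can look strong or weak depending on a single lucky initialization.

```python
import numpy as np


def train_toy_model(seed: int, steps: int = 200) -> float:
    """Toy non-convex 'training': noisy gradient descent on a bumpy loss.

    Different seeds land in different basins, so the final loss varies
    from run to run even though each run is deterministic given its seed.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-3.0, 3.0)                    # random init, as in NN training
    for _ in range(steps):
        grad = 2 * x + 3 * np.cos(3 * x)          # d/dx of x^2 + sin(3x)
        x -= 0.05 * (grad + rng.normal(scale=0.5))  # noisy SGD step
    return float(x * x + np.sin(3 * x))           # final loss


if __name__ == "__main__":
    losses = [train_toy_model(seed) for seed in range(20)]
    # Summarizing the spread across seeds, rather than one lucky run,
    # is the reproducibility-friendly way to report such a method.
    print(f"loss: {np.mean(losses):.3f} +/- {np.std(losses):.3f} "
          f"(min {np.min(losses):.3f}, max {np.max(losses):.3f})")
```

Reporting the mean, spread, and extremes across seeds, as in the final line, makes sensitivity to tuning and initialization visible to reviewers and would-be replicators.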

Organizers

  • Rosemary Nan Ke, (MILA) École Polytechnique de Montréal
  • Anirudh Goyal, (MILA) Université de Montréal
  • Alex Lamb, (MILA) Université de Montréal
  • Joelle Pineau, McGill University
  • Samy Bengio, Google Brain
  • Yoshua Bengio, (MILA) Université de Montréal