Reproducibility in Machine Learning

Reproducibility in ML Workshop, ICML'17

Description

This workshop focuses on issues of reproducibility and replication of results in the Machine Learning community.

Papers from the Machine Learning community are supposed to be a valuable asset. They can help to inform and inspire future research. They can be a useful educational tool for students. They can give guidance to applied researchers in industry. Perhaps most importantly, they can help us to answer the most fundamental questions about our existence - what does it mean to learn and what does it mean to be human? Reproducibility, while not always possible in science (consider the study of a transient astronomical phenomenon like a passing comet), is a powerful criterion for improving the quality of research. A result which is reproducible is more likely to be robust and meaningful, and it rules out many types of experimenter error (whether fraud or accident).

There are many interesting open questions about how reproducibility issues intersect with the Machine Learning community:

• How can we tell if papers in the Machine Learning community are reproducible even in theory? If a paper is about recommending news sites before a particular election, and the results come from running the system online in production, it will be impossible to reproduce the published results because the state of the world has irreversibly changed since the experiment was run.

• What does it mean for a paper to be reproducible in theory but not in practice? For example, if a paper requires tens of thousands of GPUs to reproduce or a large closed-off dataset, then it can only be reproduced in reality by a few large labs.

• For papers which are reproducible both in theory and in practice - how can we ensure that papers published at ICML would actually replicate if such an experiment were attempted?

• What does it mean for a paper to have successful or unsuccessful replications?

• Of the papers for which replication attempts have been completed, how many of those attempts have been published?

• What can be done to ensure that as many as possible of the papers which are reproducible in theory end up in the last category?

• On the reproducibility issue, what can the Machine Learning community learn from other fields?

Invited Speakers

  • John Langford, Microsoft Research
  • Hugo Larochelle, Google Brain
  • Oriol Vinyals, Google DeepMind
  • Jason Weston, Facebook AI Research
  • Damjan Vukcevic, University of Melbourne
  • Joaquin Vanschoren, Eindhoven University of Technology
  • Robert Williamson, Australian National University


Call for Papers

Our aim in this workshop is to raise the profile of these questions in the community and to search for their answers. To that end, we invite papers focusing on the following topics:

• Analysis of the current state of reproducibility in machine learning venues

• Tools to help increase reproducibility

• Evidence that reproducibility is important for science

• Connections between the reproducibility situation in Machine Learning and other fields

• Replications, both failed and successful, of influential papers in the Machine Learning literature.

Important Dates

We will accept extended abstracts of 2-6 pages in length, not including references. Submissions should be in the NIPS 2017 format.

Refereeing will be single-blind and performed on OpenReview (https://openreview.net/group?id=ICML.cc/2017/RML). Accepted papers will be presented at a poster session during the workshop. A few papers may be accepted for oral presentation.

Workshop Deadline: June 22nd

Workshop Decision: July 1st

Camera Ready Deadline: August 1st.

Submission Instructions will be posted soon.

Workshop Schedule

August 11th, 2017 (part of ICML 2017 in Sydney, Australia).

8:30-8:45 Opening remarks

8:45-9:15 Hugo Larochelle, Some Opinions on Reproducibility in ML

Abstract: I'll share my perspective on how we might improve reproducibility in ML research. I'll present my thoughts on how we could incentivize the release of source code along with research papers. Also, I'll make a case for why open and collective research initiatives could be a valuable step toward better reproducibility.

9:15 - 10:00 Robert Williamson, Beyond Reproducibility

Abstract: Reproducibility is an admirable and important goal in machine learning research. But it is not enough. In this talk I will outline some of the many other ‘ilities’ and dimensions that need to be considered. The other dimensions include proper management of uncertainty, provenance of data, provenance of the processing, rewindability, sharability of workflows, reusability, reviewability, repurposability, management of legal rights, and proof of trustworthiness, all within an environment where walled gardens do not work. So all of the additional layers that need to be built need to be able to traverse boundaries, including organisational ones; the systems that do the above need to be constructible in small units. I will argue that this long list of challenges is actually feasible and outline an approach to solve these problems as a community.

10:00-10:30 Coffee / Posters

10:30-11:00 John Langford, Reproducibility in Machine Learning

11:00-11:30 Nicolas Papernot, Adversarial Machine Learning with CleverHans

Abstract: There is growing recognition that machine learning exposes new security issues in software systems. In this talk, we first articulate a comprehensive threat model for machine learning, then present attacks against model prediction integrity using adversarial examples, and finally introduce CleverHans---the open-source library for benchmarking machine learning systems against these attacks.

Machine learning models were shown to be vulnerable to adversarial examples--subtly modified malicious inputs crafted to compromise the integrity of their outputs. Furthermore, adversarial examples that affect one model often affect another model, even if the two models have different architectures, so long as both models were trained to perform the same task. An attacker may therefore conduct an attack with very little information about the victim by training their own substitute model to craft adversarial examples, and then transferring them to a victim model.

However, reproducibility in adversarial machine learning research is hampered by the difficulty of implementing attacks correctly. We present and give a tutorial on CleverHans, an open-source library for adversarial machine learning. With CleverHans, researchers can publish benchmarks that are easily reproducible and hence accelerate the pace of progress in the community.
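
For readers unfamiliar with the attacks the library benchmarks, the sketch below implements the Fast Gradient Sign Method (FGSM) from scratch on a toy logistic-regression model. It is purely illustrative: the toy weights are made up, and it does not use the CleverHans API itself (see the project's documentation for the library's actual interface).

```python
# Purely illustrative sketch (not the CleverHans API): the Fast Gradient Sign
# Method (FGSM) attack applied to a toy logistic-regression model with
# made-up weights. CleverHans provides tested implementations of this and
# other attacks behind a common interface.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: fixed weights w and bias b (assumed, not trained).
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    """Gradient of the binary cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    return (p - y) * w       # d(loss)/dx for logistic regression

def fgsm(x, y, eps=0.1):
    """Perturb each coordinate of x in the direction that increases the loss."""
    return x + eps * np.sign(loss_grad_x(x, y))

x = rng.normal(size=10)
y = 1.0
x_adv = fgsm(x, y, eps=0.3)
print("score on clean input:      ", sigmoid(w @ x + b))
print("score on adversarial input:", sigmoid(w @ x_adv + b))
```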


11:30 - 12:00 Contributed Talk by Xinkun Nie

Why adaptively collected data have negative bias and how to correct for it.

Abstract: From scientific experiments to online A/B testing, the previously observed data often affects how future experiments are performed, which in turn affects which data will be collected. Such adaptivity introduces complex correlations between the data and the collection procedure. In this paper, we prove that when the data collection procedure satisfies natural conditions, then sample means of the data have systematic negative biases. As an example, consider an adaptive clinical trial where additional data points are more likely to be tested for treatments that show initial promise. Our surprising result implies that the average observed treatment effects would underestimate the true effects of each treatment. We quantitatively analyze the magnitude and behavior of this negative bias in a variety of settings. We also propose a novel debiasing algorithm based on selective inference techniques. In experiments, our method can effectively reduce bias and estimation error.
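
The core phenomenon is easy to see in simulation. The sketch below (an illustration only, not the paper's analysis or debiasing method) uses two treatments with identical true means and a greedy allocation rule that always assigns the next observation to the treatment that currently looks better; averaged over many replications, each treatment's sample mean comes out below the true mean.

```python
# Illustration only (not the paper's method): negative bias of sample means
# under adaptive (greedy) data collection, with two treatments whose true
# means are identical.
import numpy as np

rng = np.random.default_rng(0)
n_replications, n_obs, true_mean = 20000, 50, 0.0
errors = []

for _ in range(n_replications):
    # Start each arm with one observation, then allocate the rest greedily.
    data = [[rng.normal(true_mean)], [rng.normal(true_mean)]]
    for _ in range(n_obs - 2):
        arm = int(np.mean(data[1]) > np.mean(data[0]))  # pick the arm that looks better
        data[arm].append(rng.normal(true_mean))
    errors.append(np.mean(data[0]) - true_mean)
    errors.append(np.mean(data[1]) - true_mean)

# The average error is negative: sample means underestimate the true effect.
print("average bias of per-arm sample means:", np.mean(errors))
```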


12:00-14:00 Lunch

14:00- 14:30 Jason Weston, ParlAI: A Dialog Research Software Platform

Abstract:

We introduce ParlAI (pronounced "par-lay"), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing of dialog models, integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others' models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.

This is joint work with: Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes and Devi Parikh
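
To give a sense of the intended workflow, the sketch below follows the basic agent/world loop described at http://parl.ai. It is a rough sketch only: the exact module paths, agent names, and flags are assumptions based on the project's documented examples and may differ between ParlAI releases.

```python
# Rough sketch of the ParlAI agent/world loop; module paths and flags are
# assumptions and may differ between releases.
from parlai.core.params import ParlaiParser
from parlai.core.worlds import create_task
from parlai.agents.repeat_label.repeat_label import RepeatLabelAgent

# Select a task (here, one of the bAbI tasks) and build an agent and a world.
opt = ParlaiParser().parse_args(['--task', 'babi:task1k:1'])
agent = RepeatLabelAgent(opt)    # trivial agent that repeats the gold label
world = create_task(opt, agent)

# Step through a few dialog exchanges and print them.
for _ in range(5):
    world.parley()
    print(world.display())
```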


14:30-15:00 Joaquin Vanschoren, OpenML: Making machine learning research more reproducible (and easier) by bringing it online.

Abstract: OpenML is an open science platform for machine learning, allowing anyone to easily share data sets, code, and experiments, and collaborate with people all over the world to build better models.

It shows, for any known data set, which are the best models, who built them, and how to reproduce and reuse them in different ways. It is readily integrated into several machine learning environments, so that you can share results with the touch of a button or a line of code, automatically including all details needed for reproducing them. All results that are uploaded are evaluated online and compared to all other results in timelines and leaderboards. All solutions are also open, so that anyone can study previous solutions and build on them. As such, it enables large-scale, real-time collaboration, allowing anyone to explore, build on, and contribute to the combined knowledge of the field.

Ultimately, this provides a wealth of information for a novel, data-driven approach to machine learning, where we learn from millions of previous experiments to assist people while analyzing data and automate processes altogether.
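
As an illustration of the "line of code" claim, the sketch below assumes the openml Python package (pip install openml) together with scikit-learn; the dataset id and the exact get_data signature are assumptions that may vary between package versions.

```python
# Sketch assuming the openml Python package and scikit-learn; the exact
# get_data() signature and return values vary between package versions.
import openml
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Fetch a dataset by its OpenML id (61 is assumed here to be 'iris').
dataset = openml.datasets.get_dataset(61)
X, y, _, _ = dataset.get_data(target=dataset.default_target_attribute)

# Evaluate a simple model locally; OpenML's own integrations can additionally
# upload such runs so that they are evaluated and compared online.
clf = DecisionTreeClassifier(random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```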

15:00-15:30 Coffee

15:30-16:00 Damjan Vukcevic, Our Obsession with Dichotomization

Abstract: Many scientists worry about a ‘replication crisis’. The topic is prominently discussed in leading scientific journals. Although such concerns are not new, they have risen to attention in recent years due to a strong push for reproducibility and unprecedented endeavours at replication. For example, the Open Science Collaboration attempted to replicate findings from 100 psychological experiments, but achieved this for only about 40% of them.

While many factors lead to poor replicability, a key issue that has received little attention is the nature of scientific claims. They are usually treated as binary: a drug has an effect or has no effect, eating bacon causes cancer or does not cause cancer, female-named hurricanes are either more destructive than male-named hurricanes or no different. Even the idea of replicating a finding often presumes such a binary answer: did you replicate or did you not?

This is usually not a helpful way to view scientific progress, nor any wider fact-finding endeavour (e.g. certain applications of machine learning) where the truth remains hidden from us. Rather, we need to quantify and summarise our findings with regard to the degree of evidence, which is a more continuous concept.

When viewed in this manner, the underlying uncertainty of our original claims can be laid bare and efforts at replication can be seen in a more cumulative light: namely, the extent to which the evidence is consistent between different experiments, and the extent to which we can now be more certain of particular facts about the world.

16:00-17:00 Panel, hosted by Samy Bengio

Panelists: Hugo Larochelle, Jason Weston, Robert Williamson, John Langford

Organizers

  • Rosemary Nan Ke, (MILA) École Polytechnique de Montréal
  • Anirudh Goyal, (MILA) Université de Montréal
  • Alex Lamb, (MILA) Université de Montréal
  • Joelle Pineau, McGill University
  • Samy Bengio, Google Brain
  • Yoshua Bengio, (MILA) Université de Montréal

Slides from the Workshop

Will be added soon!





Photos from the Workshop