Abstracts

Joelle Pineau: Data-efficient learning for EEG signal analysis using spectral, temporal and spatial information

We consider the problem of developing learning algorithms for the automatic detection and classification of epilepsy. Our goal is to reduce the generalization error in automated seizure detection for both intra- and inter-patient scenarios. We study the potential of deep learning in a supervised learning framework to simultaneously capture spectral, temporal and spatial information from the EEG signal. We apply the method to a large publicly available dataset (CHB-MIT). Results show that our approach can match state-of-the-art performance in terms of sensitivity and false positive rate on patient-specific seizure detection, and exceed previous results by a significant margin on new patients. We also show that our framework is robust to missing channels and different electrode montages, thus making it practical for realistic clinical settings.
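As a rough illustration of the kind of architecture the abstract describes (not the speaker's actual model), the sketch below applies a small 2D CNN to per-electrode EEG spectrograms, so filters see spectral (frequency axis), temporal (time axis) and spatial (electrode axis) structure jointly. The electrode count, spectrogram shape and layer sizes are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a small CNN over per-channel EEG
# spectrograms. Electrodes are treated as input channels, so filters mix
# spatial information while sliding over spectral and temporal axes.
import torch
import torch.nn as nn

class SeizureDetector(nn.Module):
    def __init__(self, n_electrodes=23, n_classes=2):  # 23 channels is a
        super().__init__()                             # CHB-MIT-like assumption
        self.features = nn.Sequential(
            nn.Conv2d(n_electrodes, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # tolerant of varying window lengths
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, electrodes, freq_bins, time_steps)
        return self.classifier(self.features(x).flatten(1))

# Hypothetical usage: a batch of 4 windows, 64 frequency bins, 128 time steps.
logits = SeizureDetector()(torch.randn(4, 23, 64, 128))  # -> (4, 2)
```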

Andrew Gelman: Toward Routine Use of Informative Priors

Bayesian statistics is typically performed using noninformative priors, but the resulting inferences commonly make no sense and can also lead to computational problems, as algorithms waste time in irrelevant regions of parameter space. Certain informative priors that have been suggested don't make much sense either. We consider some aspects of the open problem of using informative priors for routine data analysis.
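A toy worked example (made-up numbers, not from the talk) of why an informative prior can help: in a conjugate normal-normal model, posterior precision is the sum of prior and data precisions, so a weakly informative prior pulls a noisy estimate toward a plausible value and tightens the posterior, while an effectively flat prior just reproduces the raw sample mean.

```python
# Illustrative sketch: conjugate normal-normal updating with known sigma,
# comparing an effectively flat prior to a weakly informative one.
import numpy as np

y = np.array([4.1, 5.8, 3.7])  # three noisy measurements (made-up data)
sigma = 2.0                    # assumed known observation sd

def posterior_mean_sd(prior_mean, prior_sd):
    # Precisions add; posterior mean is the precision-weighted average.
    prior_prec = 1.0 / prior_sd**2
    data_prec = len(y) / sigma**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * y.mean()) / post_prec
    return post_mean, post_prec**-0.5

print(posterior_mean_sd(0.0, 1e6))  # ~ sample mean 4.53, sd ~ 1.15 (flat)
print(posterior_mean_sd(0.0, 2.0))  # pulled toward 0: mean 3.4, sd 1.0
```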

Jörn-Henrik Jacobsen: Structuring Receptive Fields in Convolutional Networks

Training deep convolutional networks typically requires large amounts of data, which is a problem in, e.g., medical imaging or online learning. Pre-training on another domain may help, yet it introduces biases because image properties differ from one domain to another. We propose structured receptive field networks that treat features as compositions of continuous basis functions rather than as a handful of discrete pixel values. Such basis functions enable the extraction of explicit geometrical properties and can be designed to deliver varying degrees of invariance at the feature level. This leads to robust models that outperform classical CNNs on small datasets.
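A minimal sketch of the basis-function idea under stated assumptions (this is not the paper's implementation): fix a small bank of 2D Gaussian-derivative kernels and learn only the coefficients that mix them into effective convolution filters, so far fewer parameters are trained than in a standard convolution layer.

```python
# Rough sketch of a structured receptive field layer: filters are learned
# linear combinations of a fixed Gaussian-derivative basis. Kernel size,
# sigma and derivative order are illustrative choices.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_derivative_basis(sigma=1.0, size=7, max_order=2):
    """Fixed 2D basis: outer products of a 1D Gaussian and its derivatives."""
    x = np.arange(size) - size // 2
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g                           # first derivative
    d2g = (x**2 / sigma**4 - 1 / sigma**2) * g       # second derivative
    funcs = [g, dg, d2g][: max_order + 1]
    basis = [np.outer(fy, fx) for fy in funcs for fx in funcs]
    return torch.tensor(np.stack(basis), dtype=torch.float32)  # (n_basis,k,k)

class StructuredConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, sigma=1.0):
        super().__init__()
        self.register_buffer("basis", gaussian_derivative_basis(sigma))
        n_basis = self.basis.shape[0]
        # Only these mixing coefficients are trainable.
        self.alpha = nn.Parameter(torch.randn(out_ch, in_ch, n_basis) * 0.1)

    def forward(self, x):
        # Effective filters: weighted sums of the fixed basis kernels.
        weight = torch.einsum("oib,bkl->oikl", self.alpha, self.basis)
        return F.conv2d(x, weight, padding=self.basis.shape[-1] // 2)

out = StructuredConv2d(3, 16)(torch.randn(2, 3, 32, 32))  # -> (2, 16, 32, 32)
```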

Brenden Lake: One-shot learning of simple fractal concepts

Machine learning has made important advances in categorizing objects in images, yet the best algorithms miss important aspects of how people learn and think about categories. Compared to these systems, people learn richer concepts from fewer examples, including causal models that describe how examples of a category are produced. We probed the boundary of this capacity using a concept learning task with visual fractals. Participants were shown examples generated from a recursive causal process and asked to identify new examples in an extrapolation task. We used a Bayesian program learning model to search the space of programs for the best explanation of the observations, where different programs extrapolate in different ways. People’s judgments were broadly consistent with the model and inconsistent with several alternatives, including a pre-trained deep neural network for object recognition.
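To make the setup concrete, here is a deliberately tiny, hypothetical stand-in for program induction over recursive visual concepts: a "program" is just a branch angle and a length ratio, candidate programs are scored by how well their generated fractal matches observed points, and the best scorer is selected under a uniform prior. The actual Bayesian program learning model searches a far richer program space than this grid.

```python
# Toy sketch of program induction for recursive visual concepts (illustrative
# only). A program is (branch angle, length ratio); we score candidates by a
# crude nearest-neighbor match to observed points, a stand-in for a likelihood.
import itertools
import math

def generate(angle, ratio, depth=4):
    """Recursive binary-branching fractal; returns branch endpoint coords."""
    pts = []
    def branch(x, y, heading, length, d):
        nx, ny = x + length * math.cos(heading), y + length * math.sin(heading)
        pts.append((nx, ny))
        if d > 0:
            branch(nx, ny, heading + angle, length * ratio, d - 1)
            branch(nx, ny, heading - angle, length * ratio, d - 1)
    branch(0.0, 0.0, math.pi / 2, 1.0, depth)
    return pts

def score(pts, observed):
    # Negative sum of nearest-neighbor distances: higher is a better match.
    return -sum(min(math.dist(p, q) for q in pts) for p in observed)

observed = generate(0.5, 0.6)  # pretend this is a participant's example
candidates = itertools.product([0.3, 0.4, 0.5, 0.6], [0.5, 0.6, 0.7])
best = max(candidates, key=lambda prog: score(generate(*prog), observed))
print(best)  # -> (0.5, 0.6): the generating program is recovered
```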