Real Neurons & Hidden Units:

Future directions at the intersection of neuroscience and artificial intelligence @ NeurIPS 2019

Saturday, December 14, 2019

Room: East Ballroom A

Workshop talks and slides are available online in 4 segments here (panel discussion at the end of the 4th segment)

Neuroscience and AI have a long and entangled history, and modern progress in each field draws on the other. However, the recent explosion of AI and the development of ever more powerful experimental methods in neuroscience invite us to revisit the extent to which brains and machines can work the same way. Will understanding how the brain works really lead us to build better AI? Will building better AI really help us understand how the brain works? This workshop intends to address these contentious topics head on, and to provide a forum for healthy discussion about future directions in both AI and neuroscience and, crucially, in research endeavours that aim to establish meaningful links between the two.

Workshop Description

Recent years have witnessed an explosion of progress in AI. With it, a proliferation of experts and practitioners are pushing the boundaries of the field without regard to the brain. This is in stark contrast with the field's transdisciplinary origins, when interest in designing intelligent algorithms was shared by neuroscientists, psychologists and computer scientists alike. Similar progress has been made in neuroscience, where novel experimental techniques now afford unprecedented access to brain activity and function. However, the traditional neuroscience research program lacks the frameworks needed to fully exploit these techniques and advance an end-to-end understanding of biological intelligence. For the first time, mechanistic discoveries emerging from deep learning, reinforcement learning and other AI fields may be able to steer fundamental neuroscience research in ways beyond standard uses of machine learning for modelling and data analysis. For example, successful training algorithms in artificial networks, developed without biological constraints, can motivate research questions and hypotheses about the brain. Conversely, a deeper understanding of brain computations at the level of large neural populations may help shape future directions in AI. This workshop aims to address this novel situation by building on existing AI-Neuro relationships and, crucially, outlining new directions for cutting-edge artificial systems and the next generation of neuroscience experiments.

Schedule


8:15 - 8:30 Opening Remarks

8:30 - 9:00 Invited Talk: Hierarchical Reinforcement Learning: Computational Advances and Neuroscience Connections. Doina Precup [recording]

9:00 - 9:30 Invited Talk: Deep learning without weight transport. Tim Lillicrap

9:30 - 9:45 Contributed talk: Eligibility traces provide a data-inspired alternative to backpropagation through time. Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Anand Subramoney, Robert Legenstein, Wolfgang Maass

9:45 - 10:30 COFFEE BREAK + Posters

10:30 - 11:00 Invited Talk: Computing and learning in the presence of neural noise. Cristina Savin [recording]

11:00 - 11:30 Invited Talk: Universality and individuality in neural dynamics across large populations of recurrent networks. David Sussillo

11:30 - 11:45 Contributed talk: How well do deep neural networks trained on object recognition characterize the mouse visual system? Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge, Andreas Tolias, Alexander S. Ecker

11:45 - 12:00 Contributed talk: Functional Annotation of Human Cognitive States using Graph Convolution Networks. Yu Zhang, Pierre Bellec

12:00 - 14:00 LUNCH BREAK

14:00 - 14:30 Invited Talk: Simultaneous rigidity and flexibility through modularity in cognitive maps for navigation. Ila Fiete [recording]

14:30 - 15:00 Invited Talk: Theories for the emergence of internal representations in neural networks: from perception to navigation. Surya Ganguli

15:00 - 15:15 Contributed talk: Adversarial Training of Neural Encoding Models on Population Spike Trains. Poornima Ramesh, Mohamad Atayi, Jakob H Macke

15:15 - 15:30 Contributed talk: Learning to Learn with Feedback and Local Plasticity. Jack Lindsey

15:30 - 16:15 COFFEE BREAK + Posters

16:15 - 16:45 Posters

16:45-17:15 Invited Talk: Sensory prediction error signals in the neocortex. Blake Richards [recording]

17:15 - 18:00 Panel: A new hope for neuroscience

Panelists: Yoshua Bengio, Adrienne Fairhall, Ila Fiete, Surya Ganguli, Tim Lillicrap, Doina Precup, Blake Richards, Cristina Savin, David Sussillo

Contributed Papers

Instructions for Poster Presenters

- There are no poster boards at workshops. Posters are taped to the wall using special tabs that NeurIPS staff order for this purpose.

- Please make your posters 36W x 48H inches or 90 x 122 cm.

- Posters should be on lightweight paper, not laminated.

Revisions & Next Steps

As announced in the call for papers, we aim for a transparent and open review process that promotes community discussion about the work submitted to this workshop. In the coming days, the following settings in OpenReview will be changed:

  • All reviews will remain public, and public commenting will be allowed.
  • Author names of accepted papers will be made public; those of rejected papers will remain anonymous. All papers and reviews will remain open to public commenting.
  • Reviewers will remain anonymous.
  • Authors will be able to address reviews using the public forum, and their comments will be clearly tagged as being posted by the paper’s authors.

For accepted papers: we ask that a camera-ready version of the paper be uploaded to OpenReview by Oct. 31. This final version must have de-anonymized authors and is allowed one extra page to address reviewer (and public) comments. [total of 5 pages including citations]


Call for Papers, Review and Decisions Process

About Review and Decision Process

Although authors can address reviewers' comments on the OpenReview forum (see below), there is no rebuttal period and acceptance decisions are final. Each submitted paper was reviewed by at least two reviewers and assigned a score from 1 to 5 on each of the following criteria: importance, technical rigor, clarity, overall evaluation, and degree of intersection with the topic of the workshop (i.e. the interface of AI and neuroscience). Papers with no topical intersection were rejected even if their scores were otherwise high. A first ordering of papers was established using the sum of all scores, and the workshop organisers carried out a detailed editorial review of roughly the bottom half of papers in this ordering. Each of these papers was assessed to ensure its reviews were fair, and its individual scores were examined more closely. Diversity of authors and of topics was considered, variance across reviews was taken into account, and papers with a strong topical orientation were favoured. Our hope is that this process promotes the visibility of work that is still in progress or at an early stage of development, but that tackles genuine questions at the intersection of neuroscience and AI.
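
To make the first-pass ordering concrete, here is a minimal sketch of the scoring arithmetic described above. The paper names, scores, and helper function are hypothetical illustrations, not the actual OpenReview tooling:

```python
# Sketch of the first-pass ordering: each review scores five criteria
# from 1 to 5; papers are ranked by the sum of all review scores, and
# roughly the bottom half is flagged for detailed editorial review.
CRITERIA = ["importance", "rigor", "clarity", "overall", "intersection"]

# Hypothetical review data: paper id -> list of reviews (>= 2 each).
papers = {
    "paper_A": [
        {"importance": 4, "rigor": 4, "clarity": 5, "overall": 4, "intersection": 5},
        {"importance": 3, "rigor": 4, "clarity": 4, "overall": 4, "intersection": 4},
    ],
    "paper_B": [
        {"importance": 2, "rigor": 3, "clarity": 3, "overall": 2, "intersection": 1},
        {"importance": 3, "rigor": 2, "clarity": 3, "overall": 3, "intersection": 2},
    ],
}

def total_score(reviews):
    """Sum of all criterion scores across all reviews of a paper."""
    return sum(review[c] for review in reviews for c in CRITERIA)

ranking = sorted(papers, key=lambda p: total_score(papers[p]), reverse=True)
bottom_half = ranking[len(ranking) // 2:]  # flagged for editorial review
print("ordering:", ranking)
print("editorial review:", bottom_half)
```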

Call for Papers

Paper submission is now closed. Please see above for information about the review and decision process. For accepted contributions please follow the guidelines in Contributed Papers Section.

We invite contributions at the intersection between neuroscience and AI. In particular, we encourage work that identifies novel questions about the brain informed by the recent successes of AI, and work that establishes key brain mechanisms holding promise for further advancement of AI. The specific areas of interest are broad, including the study of recurrent dynamics, inductive biases to guide learning, global versus local learning rules, interpretability of network activity, connectivity structure, and more. Importantly, for a submission to be admissible, its topic must be at the intersection of AI and neuroscience: it must leverage findings from one field to advance the other, or address questions common to both.

We highly encourage contributions that make their code or data publicly available. All accepted submissions will be invited to present a poster, and a select number will also be invited to present short talks. Each accepted contribution must be presented in person by one of the authors during the workshop (accommodations for extenuating circumstances can be arranged). Workshop contributions do not preclude future publication, as proceedings will not be archived; however, submissions will be made publicly available and open to community comments at the time of the workshop. Work that was previously published in an ML venue is not permitted.

Submissions must be within four pages (including references and appendices) and must be submitted in PDF format via the submission website. Reviews will be double-blind, and authors should ensure that submitted papers preserve their anonymity (i.e. anonymous author names, no references to prior work that in any way identify the authors, etc.). Authors are encouraged but not required to use the NeurIPS style template (in anonymous mode). Evaluation will be according to the following criteria: relevance to the workshop theme, quality and originality of the work, and clarity.

Important information about review process, public comments & access

In the spirit of transparency, submissions and reviews will be public but anonymous during the review process. Submission authorship is made public upon decision. Final decisions will take into account review scores and diversity along several axes, including topic, seniority, and gender. Public comments, attributed to their authors, can be made throughout.

Poster Prizes

With the support of UNIQUE, we awarded three poster prizes, evaluated on relevance to the workshop, quality of presentation, and quality of the scientific work. The winners were announced on the day of the workshop:

  • Jiaqi Shang (University of Washington): Does the neuronal noise in cortex help generalization?
  • Jeffrey Siedar Cheng: Augmenting Supervised Learning by Meta-learning Unsupervised Local Rules
  • Owen Marschall: Evaluating biological plausibility of learning algorithms the lazy way

Invited talks' abstracts

Tim Lillicrap: Deep learning without weight transport

Recent advances in machine learning have been made possible by employing the backpropagation-of-error algorithm. Backprop enables the delivery of detailed error feedback across multiple layers of representation to adjust synaptic weights, allowing us to effectively train even very large networks. Whether or not the brain employs similar deep learning algorithms remains contentious; how it might do so remains a mystery. In particular, backprop uses the weights in the forward pass of the network to precisely compute error feedback in the backward pass. This way of computing errors across multiple layers is fundamentally at odds with what we know about the local computations of brains. We will describe new proposals for biologically motivated learning algorithms that are as effective as backpropagation without requiring weight transport.
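
One well-known member of this family of algorithms is feedback alignment (Lillicrap et al., 2016), in which the backward pass uses a fixed random feedback matrix in place of the transposed forward weights, so no weight transport occurs. The sketch below illustrates that generic idea on a hypothetical toy regression problem; it is an assumption-laden illustration, not the specific algorithm presented in the talk:

```python
# Minimal sketch of feedback alignment on a toy two-layer regression
# network. The backward pass uses a fixed random matrix B instead of
# W2.T, avoiding weight transport. All sizes and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, lr = 10, 64, 2, 0.01

W1 = rng.normal(0.0, 0.1, (n_hidden, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hidden))
B = rng.normal(0.0, 0.1, (n_hidden, n_out))  # fixed random feedback weights

x = rng.normal(size=(n_in, 100))   # toy inputs (features x samples)
y = rng.normal(size=(n_out, 100))  # toy targets

for step in range(500):
    h = np.tanh(W1 @ x)            # forward pass
    y_hat = W2 @ h
    e = y_hat - y                  # output error
    # Backprop would propagate W2.T @ e; feedback alignment uses B @ e.
    dh = (B @ e) * (1.0 - h**2)    # tanh derivative
    W2 -= lr * (e @ h.T) / x.shape[1]
    W1 -= lr * (dh @ x.T) / x.shape[1]

print("final MSE:", np.mean((W2 @ np.tanh(W1 @ x) - y) ** 2))
```

Empirically, the forward weights tend to align with the fixed feedback weights during training, which is why random feedback can still deliver useful error information.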


Cristina Savin: Computing and learning in the presence of neural noise

One key distinction between artificial and biological neural networks is the presence of noise, both intrinsic, e.g. due to synaptic failures, and extrinsic, arising through complex recurrent dynamics. Traditionally, this noise has been viewed as a ‘bug’, and the main computational challenge that the brain needs to face. More recently, it has been argued that circuit stochasticity may be a ‘feature’, in that it can be recruited for useful computations, such as representing uncertainty about the state of the world. Here we lay out a new argument for the role of stochasticity during learning. In particular, we use a mathematically tractable stochastic neural network model that allows us to derive local plasticity rules for optimizing a given global objective. These rules lead to representations that reflect both task structure and stimulus priors in interesting ways. Moreover, in this framework stochasticity is both a feature, as learning cannot happen in the absence of noise, and a bug, as the noise corrupts neural representations. Importantly, the network learns to use recurrent interactions to compensate for the noise's negative effects and to maintain robust circuit function.


David Sussillo: Universality and individuality in neural dynamics across large populations of recurrent networks

Neuroscience is currently undergoing a data revolution in which many thousands of neurons can be measured at once. These new data are extremely complex, and inferring the underlying brain computations from them will require a major conceptual advance. To handle this complexity, systems neuroscientists have begun training deep networks, in particular recurrent neural networks (RNNs), to make sense of these newly collected, high-dimensional data. These RNN models are often assessed by quantitatively comparing the neural dynamics of the model with those of the brain. However, the nature of the detailed neurobiological inferences one can draw from such comparisons remains elusive. For example, to what extent does training RNNs to solve simple tasks, prevalent in neuroscientific studies, uniquely determine the low-dimensional dynamics independent of neural architecture? Or alternatively, are the learned dynamics highly sensitive to different neural architectures? Knowing the answers to these questions has strong implications for whether and how to use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of RNN architectures commonly used to solve neuroscientifically motivated tasks and characterize their dynamics. We find that while the geometry of the neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and aspects of the linearized dynamics) often appears universal across all architectures. Overall, this analysis of universality and individuality across large populations of RNNs provides a much needed foundation for interpreting quantitative measures of dynamical similarity between RNN and brain dynamics.
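
The fixed-point characterization referred to here is commonly done by numerically minimizing the speed of the dynamics from many initial states (cf. Sussillo & Barak, 2013). Below is a minimal sketch of that generic technique for a hypothetical vanilla RNN; the network, parameters, and tolerance are illustrative and not taken from the talk:

```python
# Sketch of fixed-point finding for a toy RNN with update F(h) = tanh(W h):
# minimize q(h) = 0.5 * ||F(h) - h||^2 from many random initial states and
# keep the minima where q is (numerically) zero.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 20
W = rng.normal(0.0, 1.2 / np.sqrt(n), (n, n))  # random recurrent weights

def q(h):
    """Half squared speed of the dynamics; zero exactly at fixed points."""
    r = np.tanh(W @ h) - h
    return 0.5 * r @ r

fixed_points = []
for _ in range(50):
    h0 = rng.normal(0.0, 1.0, n)                # random initial state
    res = minimize(q, h0, method="L-BFGS-B")
    if res.fun < 1e-8:                          # converged to a fixed point
        fixed_points.append(res.x)

print(f"found {len(fixed_points)} candidate fixed points")
```

Linearizing the dynamics around each recovered fixed point then exposes the local structure (stability, rotations, transitions) whose topology the talk compares across architectures.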

Ila Fiete: Simultaneous rigidity and flexibility through modularity in cognitive maps for navigation

Generalizably solving complex problems involves decomposing them into simpler components and combining these parts in effective ways to solve new instances. The hippocampal complex has been a rich playground for understanding how the brain constructs and combines modular structures for flexible computation. This is because the hippocampus and associated areas generate strikingly explicit emergent representations of abstract (latent) low-dimensional variables in the domain of spatial navigation that form the elements of spatial inference but are not directly specified by the world. I will describe recent progress in characterizing the rigid nature of these representations through unsupervised discovery of latent low-dimensional structure from population data and show how these rigid and simple low-dimensional circuits can generate, in a highly flexible way, representations and memory of different (spatial and non-spatial) variables, as seen in recent experiments. I will conclude with an overview of how understanding these circuits in the realm of navigation gives insights into their potential use in higher-dimensional non-spatial cognitive representations as well.

Blake Richards: Sensory prediction error signals in the neocortex

Many models have postulated that the neocortex implements a hierarchical inference system, whereby each region sends predictions of the inputs it expects to lower-order regions, allowing the latter to learn from any prediction errors. Combining top-down predictions with bottom-up sensory information to generate errors that can then be communicated across the hierarchy is critical to credit assignment in deep predictive learning algorithms. Indirect experimental evidence supporting a hierarchical prediction system in the neocortex comes from both human and animal work. However, direct evidence for top-down guided prediction errors in the neocortex that can be used for deep credit assignment during unsupervised learning remains limited. Here, we address this issue with 2-photon calcium imaging of layer 2/3 and layer 5 pyramidal neurons in the primary visual cortex of awake mice during passive exposure to visual stimuli in which unexpected events occur. To assess the evidence for top-down guided prediction errors, we recorded from both the somatic compartments and the apical dendrites in layer 1, where a large number of top-down inputs are received. We find evidence for a diversity of prediction error signals depending on both the stimulus type and cell type. These signals can be learnt in some cases, and in turn, they appear to drive some learning. These data will help us both to understand hierarchical inference in the neocortex and, potentially, to guide new unsupervised techniques for machine learning.


Invited speakers

Doina Precup

McGill/Mila/DeepMind

Tim Lillicrap

DeepMind/UCL

Organizers

Guillaume Lajoie

Université de Montréal / Mila

Eli Shlizerman

University of Washington

Maximilian Puelma Touzel

Université de Montréal / Mila

Jessica Thompson

Université de Montréal / Mila

Konrad Kording

University of Pennsylvania

Advisors

University of Washington

Université de Montréal / Mila

Sponsors