Tutorial at CogSci

NCPW15 participants may also be interested in the full-day Tutorial Workshop on Contemporary Deep Neural Network Models at the Annual Meeting of the Cognitive Science Society on Wednesday, August 10.  The purpose of this workshop is to allow deeper engagement with the algorithms and software tools behind the science that will be discussed in the plenary presentations at NCPW15.

Note that registration for NCPW15 does not include registration for the Cognitive Science tutorial; to participate in the tutorial, you must register for the Cognitive Science Society meeting.



Schedule

09:00 - 09:10  Welcome and overview (Jay McClelland)

09:10 - 09:50  Unsupervised Deep Learning (Marco Zorzi)

09:50 - 10:30  Convolutional Neural Networks (Niko Kriegeskorte)

10:30 - 10:40  Short Break

10:40 - 11:20  Deep Reinforcement Learning (Tim Lillicrap)

11:20 - 12:00  Long Short-Term Memory and Differentiable Neural Computers (Greg Wayne)

12:00 - 13:00  Lunch

13:00 - 13:45  TensorFlow Tutorial (Steven Hansen)

13:45 - 15:15  Breakouts [bring your own laptop!]:
  • Tools for unsupervised deep learning and applications to psychological research (Alberto Testolin)

    Aim: We will learn how to train a three-layer hierarchical generative model (a deep belief network) on a large set of handwritten digit images, using a simple GPU implementation to speed up learning. We will then analyze the internal representations developed by the network, by plotting the receptive fields of hidden neurons at different levels of the hierarchy and by reading out the distributed representations (see the illustrative sketch below).

    Required hardware/software: To use our GPU implementation, an NVIDIA graphics card supporting the CUDA architecture is required. At least 1 GB of dedicated memory is recommended to train the full model on the complete dataset; reduced models should fit lower-memory GPUs. Simulations also require MATLAB (release 2012 or later) with the Parallel Computing Toolbox installed, or Python (2.7 or later) with CUDAMat (version 1.15) installed. The CPU (non-GPU) code also runs on Octave, and the network-analysis routines, written in MATLAB, should work in Octave as well.

    Resources: We strongly encourage participants to download the required source code and datasets in advance, to avoid delays and server bottlenecks during the breakout session.
    The complete source code (training and testing routines) can be found at: http://ccnl.psy.unipd.it/research/deeplearning
    The MNIST dataset can be found at: http://yann.lecun.com/exdb/mnist/
    Useful open-access reference papers can be found here and here.
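
    For orientation, here is a minimal NumPy sketch of the core learning rule used to train each layer of a deep belief network: one contrastive-divergence (CD-1) update for a single restricted Boltzmann machine. It is illustrative only, with toy layer sizes and learning rate, and is not the workshop code linked above.

        import numpy as np

        rng = np.random.default_rng(0)
        n_visible, n_hidden, lr = 784, 500, 0.05      # e.g. 28x28 MNIST images

        W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        b_v = np.zeros(n_visible)                     # visible biases
        b_h = np.zeros(n_hidden)                      # hidden biases

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        def cd1_update(v0):
            """One CD-1 step on a batch of binary visible vectors (batch x 784)."""
            global W, b_v, b_h
            # Positive phase: hidden activations driven by the data.
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
            # Negative phase: one Gibbs step back to a reconstruction.
            p_v1 = sigmoid(h0 @ W.T + b_v)
            p_h1 = sigmoid(p_v1 @ W + b_h)
            # Approximate likelihood gradient: data statistics minus model statistics.
            n = v0.shape[0]
            W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
            b_v += lr * (v0 - p_v1).mean(axis=0)
            b_h += lr * (p_h0 - p_h1).mean(axis=0)

    After training, each column of W is a receptive field: reshape it to 28x28 and plot it to visualize what a hidden neuron has learned.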

  • Convolutional Neural Networks and applications to neuroscience (Niko Kriegeskorte)

    Aim: This breakout will focus primarily on how to validate the representations learned by deep convolutional networks and compare them with data from modern neuroimaging techniques (see the illustrative sketch below).

    Required hardware/software: No particular hardware/software is required.

    Resources: Useful background papers describing the general approach discussed in the breakout can be found here and here.
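
    The linked papers describe the breakout's actual approach; as orientation, here is a minimal Python sketch of one widely used method of this kind, representational similarity analysis (RSA), using simulated data in place of real network activations and fMRI patterns.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_stimuli = 50
        net_features = rng.standard_normal((n_stimuli, 4096))   # e.g. CNN layer activations
        brain_patterns = rng.standard_normal((n_stimuli, 200))  # e.g. fMRI voxel patterns

        # A representational dissimilarity matrix (RDM) holds the pairwise
        # dissimilarities between responses to all stimuli; correlation distance
        # is a common choice. pdist returns the condensed (upper-triangle) form.
        rdm_net = pdist(net_features, metric="correlation")
        rdm_brain = pdist(brain_patterns, metric="correlation")

        # Rank-correlating the two RDMs asks whether the network and the brain
        # region impose a similar geometry on the stimulus set.
        rho, p = spearmanr(rdm_net, rdm_brain)
        print(f"RDM correlation: rho = {rho:.3f} (p = {p:.3g})")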

  • Backpropagation and how depth affects computation (Andrew Saxe)

    Resources: At the bottom of this page you can find the source code for this breakout ("backprop_breakout.zip").

  • Implementing Deep Q learning (Steven Hansen)

    Aim: This breakout will focus on doing reinforcement learning in TensorFlow, with an emphasis on implementation details (see the illustrative sketch below).

    Required hardware/software: TensorFlow and OpenAI Gym (installation walkthrough at the start of the breakout).

    Resources: A useful background paper can be found here, along with the ever-useful TensorFlow API documentation, the starter code, and a guide to installing OpenAI Gym.
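
    The attached starter code is the breakout's actual material; the sketch below shows only the core deep Q-learning update (no replay buffer or target network), written against the modern tf.keras API, with CartPole-like dimensions assumed for illustration.

        import tensorflow as tf

        n_actions, obs_dim, gamma = 2, 4, 0.99     # e.g. CartPole in OpenAI Gym

        # Q-network: maps an observation to one value estimate per action.
        q_net = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(obs_dim,)),
            tf.keras.layers.Dense(n_actions),
        ])
        optimizer = tf.keras.optimizers.Adam(1e-3)

        def dqn_step(obs, action, reward, next_obs, done):
            """One gradient step toward the TD target r + gamma * max_a' Q(s', a')."""
            # Bootstrapped target; (1 - done) switches off bootstrapping at episode ends.
            target = reward + gamma * (1.0 - done) * tf.reduce_max(q_net(next_obs), axis=1)
            with tf.GradientTape() as tape:
                q_taken = tf.gather(q_net(obs), action, batch_dims=1)  # Q(s, a) taken
                loss = tf.reduce_mean(tf.square(target - q_taken))
            grads = tape.gradient(loss, q_net.trainable_variables)
            optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
            return loss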

  • Reinforcement learning in continuous action spaces (Tim Lillicrap)

    Aim: This breakout will focus on doing reinforcement learning in TensorFlow, with a focus on continuous action spaces (see the illustrative sketch below).

    Required hardware/software: TensorFlow.

    Resources: At the bottom of this page you can find the required source code ("deeprl.ipynb"). You can also find a useful reference paper about Asynchronous Methods for Deep Reinforcement Learning.
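
    The attached notebook is the breakout's actual material; the sketch below illustrates only the characteristic ingredient of continuous control, a deterministic policy that emits bounded real-valued actions (as in actor-critic methods such as DDPG). The dimensions and noise scale are assumptions, loosely modeled on Gym's Pendulum task.

        import tensorflow as tf

        obs_dim, act_dim, act_limit = 3, 1, 2.0    # e.g. Pendulum in OpenAI Gym

        # Actor: maps an observation to an action in (-1, 1) via tanh,
        # then scales it to the environment's action bounds.
        actor = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(obs_dim,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(act_dim, activation="tanh"),
        ])

        def act(obs, noise_scale=0.1):
            """Pick a continuous action, adding Gaussian noise for exploration."""
            a = act_limit * actor(obs)
            a += noise_scale * tf.random.normal(tf.shape(a))
            return tf.clip_by_value(a, -act_limit, act_limit)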
  • Recurrent Neural Networks and Long Short-Term Memory networks (Greg Wayne)

    Aim: We will experiment with different types of recurrent networks (see the illustrative sketch below).

    Required hardware/software: Simulations are written in Python.

    Resources: At the bottom of this page you can find the required source code for both the RNN ("my_rnn.ipynb") and LSTM ("my_lstm.ipynb") models.
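
    The attached notebooks are the breakout's actual materials; for orientation, here is a minimal NumPy sketch of one time step of each network type, with toy sizes and biases omitted for brevity.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid = 10, 20

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Vanilla RNN: the new hidden state is a squashed linear function
        # of the current input and the previous hidden state.
        W_xh = 0.1 * rng.standard_normal((n_in, n_hid))
        W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))

        def rnn_step(x, h):
            return np.tanh(x @ W_xh + h @ W_hh)

        # LSTM: multiplicative gates decide what to forget, what to write,
        # and what to expose, which helps gradients survive long time lags.
        W = 0.1 * rng.standard_normal((n_in + n_hid, 4 * n_hid))

        def lstm_step(x, h, c):
            z = np.concatenate([x, h]) @ W
            f = sigmoid(z[:n_hid])                  # forget gate
            i = sigmoid(z[n_hid:2 * n_hid])         # input gate
            o = sigmoid(z[2 * n_hid:3 * n_hid])     # output gate
            g = np.tanh(z[3 * n_hid:])              # candidate cell values
            c = f * c + i * g                       # update cell state
            h = o * np.tanh(c)                      # gated view of the cell
            return h, c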

15:15 - 16:00  Plenary Discussion
Attachments:
  • backprop_breakout.zip (5k)
  • deeprl.ipynb (72k)
  • my_lstm.ipynb (63k)
  • my_lstm.py (6k)
  • my_rnn.ipynb (63k)
  • my_rnn.py (6k)