TAU SEMINARS CALENDAR (requires Zimbra login)
This website organizes the speed-reading activity of the TAU group, whose aim is to help PhD students and post-docs quickly build their bibliographies. The format is the following:
- Every week, up to 3 papers are chosen and reviewed by 3 students, each using only 3 slides during 3 minutes, plus 7 minutes of questions.
- We then go for tea/coffee to continue the discussion.
The review should follow the NIPS review form template.
The slides should follow this template:
- slide 1: Background (definition of the problem, motivational applications, previous related papers)
- slide 2: Material and methods (description of the data, algorithms, experimental setting, methodology)
- slide 3: Results and conclusion.
Tips for selecting papers: in your opinion, the paper should rate 5/5 on each of these criteria:
- Impact/soundness: The paper should preferably have been accepted at a good conference (orals are best) or journal; check the number of citations on Google Scholar; do not necessarily read the latest papers unless you have just come back from a conference: papers that have survived a few years and are highly cited are likely to be more important and sounder.
- Relevance: The paper should be relevant to your thesis subject. You can make occasional exceptions to gain “breadth”.
- Usefulness: It should be crystal clear (1) what problem the paper is solving, and (2) what is new/better compared to previous methods, AND/OR the paper should offer new/good insight into the problem(s). The application motivation should also be clear.
- Clarity: If the paper is too hard for you to understand, SKIP IT: in a few months you will likely be able to read it. This is like surfing a wave: if you cannot paddle hard enough to catch the wave, the effort is wasted.
Date and time in 2017 (Paris time)
Reviewer / presenter
Lisheng
Lisheng
Benjamin
Benjamin
Lisheng
Benjamin
Benjamin
Lisheng
Olivier / Diviyan
Berna
Diviyan
Benjamin
Victor
Lisheng
Benjamin
Isabelle
Lisheng
Berna
Benjamin
Priyanka
Lisheng
Berna
Corentin
Guillaume Charpiat
Benjamin
Victor
Thomas
Lisheng
Berna
Guillaume Doquet
Corentin
Theophile
Benjamin
Lisheng
Diviyan
Rumana
Aris
Paper and authors
Learning to communicate with deep multi-agent reinforcement learning (by Jakob Foerster et al.)
Learning to learn by gradient descent by gradient descent (by Marcin Andrychowicz et al.)
Learning to Poke by Poking: Experiential Learning of Intuitive Physics (by Pulkit Agrawal et al.)
Value Iteration Networks, Aviv Tamar et al.
Cooperative Inverse Reinforcement Learning (by Dylan Hadfield-Menell et al.)
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (by Sergey Ioffe, Christian Szegedy)
Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks (by Tim Salimans and Diederik P. Kingma)
Introduction to Reinforcement Learning (I)
Distinguishing Cause from Effect Using Observational Data: Methods and Benchmarks (by J. M. Mooij et al.)
Estimating Causal Structure Using Conditional DAG Models (by Chris J. Oates et al.)
Conditional distribution variability measures for causality detection (by José A. R. Fonollosa)
Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Nitish Srivastava et al.
Transfer Metric Learning by Learning Task Relationships
Introduction to Reinforcement Learning (end)
Deep Residual Learning for image recognition, Kaiming He et al. (2015)
Mastering the game of Go with deep neural networks and tree search, D. Silver et al. (2016)
Human-level control through deep reinforcement learning, V. Mnih et al. (2015)
Recovering from Selection Bias in Causal and Statistical Inference, E. Bareinboim et. al. (2014)
Thursday December 16, 16h15
Thursday December 16, 16h15
Thursday December 16, 16h15
Thursday December 16, 16h15
Tuesday January 10, 17h
Tuesday January 10, 17h
Tuesday January 10, 17h
Tuesday January 17, 16h
Tuesday January 17, 16h
Tuesday January 17, 16h
Tuesday January 24, 16h
Tuesday January 24, 16h
Tuesday January 31, 16h
Tuesday January 31, 16h
Tuesday February 7, 16h
Tuesday February 7, 16h
Tuesday February 28, 16h
Tuesday February 28, 16h
Tuesday March 14, 16h
Tuesday March 14, 16h
Tuesday March 21, 16h
Tuesday March 21, 16h
Tuesday March 28, 16h
Tuesday March 28, 16h
Tuesday April 18, 16h
Tuesday April 18, 16h
Tuesday May 2, 16h
Tuesday May 2, 16h
Tuesday May 2, 16h
Tuesday May 2, 16h
Tuesday May 23, 16h
Tuesday May 23, 16h
Tuesday May 23, 16h
Tuesday May 30, 16h
Tuesday June 13, 16h
Tuesday June 13, 16h
Tuesday June 13, 16h
Regularization of Neural Networks using DropConnect, Li Wan et al. 2013
Image Style Transfer Using Convolutional Neural Networks (CVPR 2016)
- Sequential Model-Based Optimization for General Algorithm Configuration by F. Hutter et al.
- Efficient Robust Automated Machine Learning by M. Feurer et al.
Could a Neuroscientist Understand a Microprocessor? by Eric Jonas and Konrad Paul Kording
Towards Principled Methods for Training Generative Adversarial Networks, Martin Arjovsky and Léon Bottou (January 2017)
Generative Adversarial Networks, Ian J. Goodfellow et al.
"DeepStack: Expert-Level Artificial Intelligence in Heads-Up No-Limit Poker" by Matej Moravčík, Martin Schmid, et al.
Tangent Prop - A Formalism for Specifying Selected Invariances in an Adaptive Network (1991)
Learning a Parametric Embedding by Preserving Local Structure
PathNet: Evolution Channels Gradient Descent in Super Neural Networks (by Chrisantha Fernando et al. 2017)
Extended Topological Metrics for the Analysis of Power Grid Vulnerability (by E. Bompard, E. Pons and D. Wu, 2012)
Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Distances (by Marco Cuturi, 2013)
Memory-efficient Backpropagation Through Time by Audrunas Gruslys et al. (2016)
Understanding deep learning requires rethinking generalization by C. Zhang et al. (ICLR 2017)
Learning to act by predicting the future, Alexey Dosovitskiy and Vladlen Koltun (ICLR 2017)
- Neural Architecture Search with Reinforcement Learning by Barret Zoph and Quoc V. Le (ICLR 2017)
- Designing Neural Network Architectures using Reinforcement Learning by B. Baker et al. (ICLR 2017)
Entropy-SGD: Biasing Gradient Descent Into Wide Valleys by P. Chaudhari et al. (ICLR 2017)
Latent Dirichlet Allocation, David M. Blei et al. (JMLR 2003)