Alex Lamb
I am currently a senior researcher at Microsoft Research (MSR-NYC) under John Langford. This ICML tutorial (https://icml.cc/virtual/2023/tutorial/21561) is a good overview of what I've been working on.
I completed my PhD in Computer Science at the University of Montreal, advised by Yoshua Bengio, and was a recipient of the 2020 Twitch PhD Fellowship.
While a graduate student, I completed internships at Google Brain Tokyo with David Ha and at Preferred Networks with Takeru Miyato. I also worked at the National Institute of Informatics with Tarin Clanuwat.
In 2018 I completed my M.Sc. at the University of Montreal with Aaron Courville. Before joining the University of Montreal, I was an applied research scientist at Amazon (2013-2015), where I worked on developing new algorithms for demand forecasting.
Research Interests
My research is at the intersection of developing new machine learning algorithms and finding new applications for them. On the algorithms side, I'm particularly interested in (1) making deep networks more modular and richly structured and (2) improving the generalization performance of deep networks, especially across shifting domains. I am especially drawn to techniques that take functional inspiration from the brain and psychology to improve performance on real tasks.
In terms of applications of machine learning, I'm interested in pretty much everything. My most recent applied work has been on historical Japanese documents and resulted in KuroNet, a publicly released service that makes classical Japanese documents (more) understandable to readers of modern Japanese. At Amazon, I worked on systems for forecasting future product demand. As an undergraduate, I developed text classifiers for Twitter to help measure and monitor flu outbreaks.
Education
Ph.D. Computer Science – University of Montreal. 2018-Present.
M.Sc. Computer Science – University of Montreal. 2016-2018.
B.Sc. Computer Science and Applied Mathematics and Statistics – Johns Hopkins University. 2011-2013.
Publications
Conference and Journal Publications:
Recurrent Independent Mechanisms. Anirudh Goyal, Alex Lamb, Shagun Sodhani, Jordan Hoffmann, Sergey Levine, Yoshua Bengio, Bernhard Scholkopf. ICLR 2021 Spotlight Oral.
Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers. Alex Lamb, Anirudh Goyal, Agnieszka Słowik, Michael Mozer, Philippe Beaudoin, Yoshua Bengio. AISTATS 2021. 29.8% Acceptance Rate
GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning. Vikas Verma, Meng Qu, Alex Lamb, Yoshua Bengio, Juho Kannala, Jian Tang. AAAI 2021.
Combining Top-Down and Bottom-Up Signals with Attention over Modules. Sarthak Mittal, Alex Lamb, Anirudh Goyal, Vikram Voleti, Murray Shanahan, Guillaume Lajoie, Michael Mozer, Yoshua Bengio. ICML 2020. 21.8% Acceptance Rate
KuroNet: Regularized Residual U-Nets for End-to-End Kuzushiji Character Recognition. Alex Lamb, Tarin Clanuwat, Asanobu Kitamoto. Springer-Nature Computer Science 2020.
KaoKore: A Pre-modern Japanese Art Facial Expression Dataset. Yingtao Tian, Chikahiko Suzuki, Tarin Clanuwat, Mikel Bober-Irizar, Alex Lamb, Asanobu Kitamoto. ICCC 2020.
SketchTransfer: A New Dataset for Exploring Detail-Invariance and the Abstractions Learned by Deep Networks. Alex Lamb, Sherjil Ozair, Vikas Verma, David Ha. WACV 2020. 34.6% Acceptance Rate
On Adversarial Mixup Resynthesis. Chris Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R. Devon Hjelm, Christopher Pal. NeurIPS 2019. 21.2% Acceptance Rate
State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations. Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Denis Kazakov, Yoshua Bengio, Michael C Mozer. ICML 2019. Long Oral, 5.0% Acceptance Rate
Manifold Mixup: Learning Better Representations by Interpolating Hidden States. Alex Lamb*, Vikas Verma*, Christopher Beckham, Amir Najafi, Aaron Courville, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio. ICML 2019. 22.6% Acceptance Rate
Interpolated Adversarial Training: Achieving Robust Neural Networks Without Sacrificing Too Much Accuracy. Alex Lamb*, Vikas Verma*, David Lopez-Paz. AiSec 2019. 23.8% Acceptance Rate
KuroNet: Pre-Modern Japanese Kuzushiji Character Recognition with Deep Learning. Alex Lamb*, Tarin Clanuwat*, Asanobu Kitamoto. ICDAR 2019. Oral, 12.9% Acceptance Rate
Interpolation Consistency Training for Semi-Supervised Learning. Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, David Lopez-Paz. IJCAI 2019. 17.9% Acceptance Rate
End-to-End Pre-Modern Japanese Character (Kuzushiji) Spotting with Deep Learning. Tarin Clanuwat, Alex Lamb, Asanobu Kitamoto. Information Processing Society of Japan Conference on Digital Humanities 2018. Best Paper Award (1/60 accepted papers)
GibbsNet: Iterative Adversarial Inference for Deep Graphical Models. Alex Lamb, Devon Hjelm, Yaroslav Ganin, Joseph Paul Cohen, Aaron Courville, Yoshua Bengio. NeurIPS 2017
Adversarially Learned Inference. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville. ICLR 2017
Professor Forcing: A New Algorithm for Training Recurrent Networks. Alex Lamb*, Anirudh Goyal*, Ying Zhang, Saizheng Zhang, Aaron Courville, Yoshua Bengio. NeurIPS 2016
Separating Fact from Fear: Tracking Flu Infections on Twitter. Alex Lamb, Michael J. Paul, Mark Dredze. NAACL 2013
Pre-print and Workshop Papers:
Deep Learning for Classical Japanese Literature. Tarin Clanuwat, Mikel Bober-Irizar, Asanobu Kitamoto, Alex Lamb, Kazuaki Yamamoto, David Ha. NeurIPS Creativity Workshop 2019.
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations. Alex Lamb, Jonathan Binas, Anirudh Goyal, Dzmitry Serdyuk, Sandeep Subramanian, Ioannis Mitliagkas, Yoshua Bengio. arXiv.
Learning Generative Models with Locally Disentangled Latent Factors. Alex Lamb*, Brady Neal*, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Ioannis Mitliagkas. arXiv.
ACtuAL: Actor-Critic Under Adversarial Learning. Anirudh Goyal, Nan Rosemary Ke, Alex Lamb, Devon Hjelm, Chris Pal, Joelle Pineau, Yoshua Bengio. arXiv.
Demand Forecasting Via Direct Quantile Loss Optimization. Kari Torkkola, Ru He, Wen-Yu Hua, Alex Lamb, Murali Balakrishnan Narayanaswamy, Zhihao Cen. US Patent. P36059-US.
Discriminative Regularization for Generative Models. Alex Lamb, Vincent Dumoulin, Aaron Courville. CVPR Deepvision Workshop 2016.
Variance Reduction in SGD by Distributed Importance Sampling. Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, Yoshua Bengio. ICLR Workshop 2016.
Investigating Twitter as a Source for Studying Behavioral Responses to Epidemics. Alex Lamb, Michael J. Paul, Mark Dredze. AAAI Fall Symposium on Information Retrieval and Knowledge Discovery in Biomedical Text 2012.
Writings (for general public)
I have written a few non-academic articles aimed at the general public.
Article on thegradient.pub:
Article on MILA blog:
Teaching
I created a YouTube channel on machine learning with 13 videos, over 39,500 views, over 715 subscribers, and over 160,000 minutes watched.
I served as a teaching assistant and gave a guest lecture for Aaron Courville's course on Deep Learning (Winter 2017).
IJCAI 2020 Tutorial on Modularity and Generalization (upcoming).
I have given many public lectures introducing my research:
Deep Learning course at the University of Tokyo (Fall 2018) - Manifold Mixup.
Machine Learning Tokyo (Fall 2019) - Adversarial Mixup Resynthesis.
Tokyo Data Science (Fall 2019) - Recurrent Independent Mechanisms.
Mentoring
While I was an applied research scientist on Amazon's forecasting team, I supervised two summer PhD interns: Sholeh Forouzan (received and accepted a full-time offer at Amazon) in 2014 and Pyry Takala (founded TrueAI) in 2015.
Datasets and Code for Research
SketchTransfer:
Manifold Mixup and its Applications:
Press Coverage of Research
Several of my research projects have been covered by the mainstream press.
Japanese Medieval Document Recognition:
By the Book: AI Making Millions of Ancient Japanese Texts More Accessible (NVIDIA Blog)
Secrets of billions of ancient Japanese texts being uncovered by AI (9news)
Choosing AI benchmark tasks to benefit other fields: Starting with Japanese Literature (MILA Blog)
Twitter Flu Analysis Research:
Manifold Mixup:
YouTube video by Yannic Kilcher explaining the algorithm.
Academic and Community Service
I have served as a reviewer for the following conferences:
NeurIPS (2016-2019)
ICML (2019)
ICLR (2019-2020)
UAI (2019)
I have co-organized four academic workshops:
Reproducibility in Machine Learning (RML). Nan Rosemary Ke, Alex Lamb, Anirudh Goyal, Olexa Bilaniuk, Yoshua Bengio. ICLR 2019
Workshop on Efficient Credit Assignment in Deep Learning and Deep Reinforcement Learning. Anirudh Goyal, Alex Lamb, Nan Rosemary Ke, Aaron Courville, Konrad Kording, Yoshua Bengio. ICML 2018
Reproducibility in Machine Learning (RML). Nan Rosemary Ke, Alex Lamb, Peter Henderson, Anirudh Goyal, Aaron Courville, Chris Pal, Hugo Larochelle, Oriol Vinyals, Yoshua Bengio. ICML 2018
Reproducibility in Machine Learning. Nan Rosemary Ke, Anirudh Goyal, Alex Lamb, Joelle Pineau, Samy Bengio, and Yoshua Bengio. ICML 2017
I also helped organize a machine learning competition with a 15k USD prize pool, which drew 338 competitors, 293 teams, and 2,652 submissions.