Junyoung Chung (정준영)
Research Scientist
Montreal Institute for Learning Algorithms
Department of Computer Science
Université de Montréal
[firstname].[lastname] [at] umontreal [dot] ca
Bio
I am a Research Scientist at Google DeepMind, where I have contributed to several large-scale projects, including AlphaStar, AlphaCode, and Veo. Before joining Google DeepMind, I received my PhD in machine learning from MILA at the Université de Montréal in 2018, where I was fortunate to be advised by Professor Yoshua Bengio.
Research Interests
I have a broad interest in deep learning, from deep RL (AlphaStar) and LLMs (AlphaCode) to large-scale diffusion models (Veo).
Language modeling (PhD thesis)
RL (AlphaStar)
Code generation (AlphaCode)
Video generation (Veo)
Here is a link to my CV
News
Veo was announced at Google I/O 2024.
The AlphaCode paper has been published in Science [webpage], and it was highlighted as one of the breakthroughs of the year [webpage].
I am thrilled to share our new paper about AlphaCode [blog][pdf]
Our paper, Step-unrolled Denoising Autoencoders for Text Generation, got accepted to ICLR 2022 [pdf]
The AlphaStar paper has been published in Nature [webpage]
My PhD dissertation is now publicly available [pdf]
The AlphaStar team has released its first results on StarCraft II [DeepMind blog]
I joined DeepMind as a research scientist.
Our paper, Hierarchical Multiscale Recurrent Neural Networks, got accepted to ICLR 2017 [pdf]
I did a fall internship at Google Research, Mountain View, 9/2016 - 12/2016
Our paper, A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation, got accepted to ACL 2016 (oral) [pdf][code]
I was a visiting PhD student in the NYU machine learning group (CILVR), working with Professor Kyunghyun Cho, 10/2015 - 4/2016
Our paper, Detecting Interrogative Utterances with Recurrent Neural Networks, received the best paper award at the NeurIPS 2015 Workshop on Machine Learning for Spoken Language Understanding and Interactions
Our paper, A Recurrent Latent Variable Model for Sequential Data, got accepted to NeurIPS 2015 [pdf][code]
I did a summer internship at Microsoft Research, Redmond, 6/2015 - 9/2015
Our paper, Gated Feedback Recurrent Neural Networks, got accepted to ICML 2015 [pdf]
Talks
AlphaCode
Naver, Pangyo, 12/2022
Yonsei University, Seoul, 12/2022
Korea University, Seoul, 12/2022
Google Speech, virtual, 4/2022
Google Translate, virtual, 3/2022
HKUST, virtual, 3/2022
AlphaStar
Kakao Brain, virtual, 6/2021
UNIST, virtual, 5/2021
Hanyang University, Seoul, 8/2019
Neural Network Architectures for Time-Series and Machine Learning Problems, AI Korea Summer School @Yonsei University, Seoul, 7/2019
Towards Hierarchical Multiscale Recurrent Neural Networks
Naver, Pangyo, 9/2017
Kakao Brain, Pangyo, 9/2017
Toyota Technological Institute at Chicago, Chicago, 6/2017
Hierarchical Multiscale Recurrent Neural Networks @Google Research, Mountain View, 10/2016
A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation @ACL'16, Berlin, 8/2016
A Character-Level Decoder for Neural Machine Translation and Multiscale Recurrent Neural Networks @DeepMind, London, 8/2016
Deep Recurrent Neural Networks for Sequence Modelling
Samsung Advanced Institute of Technology, Suwon, 4/2016
Hyundai Motors, Uiwang, 4/2016
Detecting Interrogative Utterances with Recurrent Neural Networks @NeurIPS Workshop on SLUI, Montreal, 12/2015
Variational Recurrent Neural Networks @New York University, New York City, 11/2015
Gated Feedback Recurrent Neural Networks @ICML'15, Lille, 7/2015