Speakers

Jacob Devlin (Google)

Jacob Devlin is a Senior Research Scientist at Google, where his primary research interest is developing fast, powerful, and scalable deep learning models for information retrieval, question answering, and other language understanding tasks. From 2014 to 2017, he worked as a Principal Research Scientist at Microsoft Research, where he led Microsoft Translate's transition from phrase-based translation to neural machine translation (NMT). He also developed state-of-the-art on-device models for mobile NMT. Mr. Devlin was the recipient of the ACL 2014 Best Long Paper award and the NAACL 2012 Best Short Paper award. He received his Master's in Computer Science from the University of Maryland in 2009, advised by Dr. Bonnie Dorr.

André Martins (Unbabel)

André Martins is the Head of Research at Unbabel, a research scientist at Instituto de Telecomunicações, and an invited professor at Instituto Superior Técnico, University of Lisbon. He received his dual-degree PhD in Language Technologies in 2012 from Carnegie Mellon University and Instituto Superior Técnico. His research interests include natural language processing, machine learning, deep learning, and optimization. He received a best paper award at the Annual Meeting of the Association for Computational Linguistics (ACL) for his work on natural language syntax, and an SCS Honorable Mention at CMU for his PhD dissertation. He is one of the co-founders and organizers of the Lisbon Machine Learning Summer School (LxMLS).

Rico Sennrich (Edinburgh)

Rico Sennrich is a lecturer in machine learning at the Institute for Adaptive and Neural Computation, University of Edinburgh. He received his PhD in Computational Linguistics from the University of Zurich in 2013, and has since worked at the University of Edinburgh. His recent research, funded by an SNSF post-doctoral fellowship, the EU H2020 programme, and industry collaborations, has focused on modelling linguistically challenging phenomena in machine translation, including grammaticality, productive morphology, domain effects, discourse, and pragmatic aspects. His work on syntax-based and neural machine translation has resulted in regular top-ranked submissions to the annual WMT shared translation task.

Yulia Tsvetkov (CMU)

Yulia Tsvetkov is an assistant professor in the Language Technologies Institute, School of Computer Science at Carnegie Mellon University. Her research interests lie at or near the intersection of natural language processing, machine learning, and linguistics. Prior to joining LTI, she was a postdoc in the Stanford NLP Group, and received her PhD from CMU. Research projects in her group currently focus on multilingualism and low-resource NLP, interpretability of deep learning, controllable text generation, and NLP for social good.