Imitation, Intent, and Interaction (I3)

Invited Speakers

Kris M. Kitani is an assistant research professor in the Robotics Institute at Carnegie Mellon University. He received his BS at the University of Southern California and his MS and PhD at the University of Tokyo. His research projects span the areas of computer vision, machine learning, and human-computer interaction. In particular, his research interests lie at the intersection of first-person vision, human activity modeling, and inverse reinforcement learning. His work has been awarded the Marr Prize honorable mention at ICCV 2017, best paper honorable mention at CHI 2017, best technical paper at W4A 2017, best application paper at ACCV 2014, and best paper honorable mention at ECCV 2012.

I direct the Language and Interaction Research (LAIR) Group. I received a Ph.D. in Computer Science from Duke University. Before joining MSU in January 2003, I was a research staff member at IBM T. J. Watson Research Center. My research interests include natural language processing, situated dialogue agents, artificial intelligence, human-robot communication, and intelligent user interfaces. My recent work has focused on grounded language processing to facilitate situated communication with robots and other artificial agents. Our main objectives are to enable natural interaction between humans and robots and to allow robots to continuously learn from humans about the joint environment and tasks. I am also affiliated with the MSU Cognitive Science Program.

Hal Daumé III is a professor in Computer Science at the University of Maryland, College Park; he is currently on leave at Microsoft Research, New York City. He holds joint appointments in UMIACS and Linguistics. He was previously an assistant professor in the School of Computing at the University of Utah. His primary research interest is in developing new learning algorithms for prototypical problems that arise in the context of natural language processing and artificial intelligence, with a focus on interactive systems, utilizing background knowledge, and fairness. He associates himself most with conferences like ACL, ICML, NeurIPS and EMNLP, where he has published over 100 papers. He has received several "best of" awards, including at ACL 2018, NAACL 2016, NeurIPS 2015, CEAS 2011 and ECML 2009. He has been program chair for NAACL 2013 (and chair of its executive board), and will be program chair for ICML 2020; he was an inaugural diversity and inclusion co-chair at NeurIPS 2018. He earned his PhD at the University of Southern California with a thesis on structured prediction for language (his advisor was Daniel Marcu). He spent the summer of 2003 working with Eric Brill in the machine learning and applied statistics group at Microsoft Research. Prior to that, he studied math (mostly logic) at Carnegie Mellon University, while working at the Language Technology Institute.

Stefano Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory and a fellow of the Woods Institute for the Environment.

His research is centered on techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty. It is motivated by a range of applications, in particular those in the emerging field of computational sustainability.

Natasha Jaques is a PhD candidate working on Affective Machine Learning: problems at the intersection of machine learning (ML), deep learning, emotion, mental health, and social interaction. She is interested in methods that allow ML models to learn generalizable representations across a range of data or tasks, including transfer learning, multi-task learning, and intrinsic motivation. Recently, she has begun investigating how social and emotional inductive biases can improve generalization and learning. She is experienced in traditional machine learning, deep learning, Bayesian methods, causal inference, and reinforcement learning.