Ongoing projects
Acquire knowledge by interacting with the world
Many autonomous systems, such as personalized robotic assistants, are required to interact with people. A natural way to communicate with robots is via natural language. However, human language is inherently complex: how can autonomous systems effectively understand human instructions and explore the environment safely?
We propose a computational model in the context of instruction following: the autonomous system directly reasons about the structure of the instruction to learn robust and interpretable representations for language grounding.
Tsung-Yen Yang, Andrew S. Lan, Karthik Narasimhan. "Robust and Interpretable Grounding of Spatial References with Relation Networks." Findings of the Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP), 2020, paper
Tsung-Yen Yang*, Michael Hu*, Yinlam Chow, Peter J. Ramadge, Karthik Narasimhan. "Safe Reinforcement Learning with Natural Language Constraints." arXiv, 2020, paper, demo (*Equal contribution)
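To give a flavor of the approach, the generic relation-network readout (in the style of Santoro et al.) aggregates a learned pairwise relation over all object pairs before producing a grounding score. The sketch below is a minimal illustration of that formulation, not the exact architecture used in the paper; the toy choices of g and f are assumptions for demonstration only.

```python
import numpy as np

def relation_network(objects, g, f):
    """Generic relation-network readout: f(sum over all ordered pairs
    (o_i, o_j) of g(o_i, o_j)). `objects` is a list of feature vectors,
    g scores a pairwise relation, and f maps the aggregated relations
    to a final score."""
    pair_sum = sum(g(oi, oj) for oi in objects for oj in objects)
    return f(pair_sum)

# toy example: two 2-d object embeddings, g = elementwise product,
# f = sum of components
objs = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
score = relation_network(objs, g=lambda a, b: a * b, f=np.sum)
```

Because the sum ranges over all pairs, the readout is permutation-invariant in the objects, which is what makes relational reasoning over spatial references tractable.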
Provide provable safety guarantees during deployment and training
Many autonomous systems, such as self-driving cars and industrial robots, are complex. To deal with this complexity, researchers are increasingly using reinforcement learning (RL) to design control policies. However, one issue limits RL's widespread deployment in real-world systems: how can autonomous systems learn safely?
We propose an algorithm for learning constraint-satisfying policies in the context of reinforcement learning with constraints: the autonomous system maximizes reward while projecting its policy updates onto the constraint set, ensuring constraint satisfaction with provable performance guarantees.
Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, Peter J. Ramadge. "Accelerating Safe Reinforcement Learning with Constraint-mismatched Policies." International Conference on Machine Learning (ICML), 2021, paper
Tsung-Yen Yang, Justinian Rosca, Karthik Narasimhan, Peter J. Ramadge. "Projection-Based Constrained Policy Optimization." International Conference on Learning Representations (ICLR), 2020, paper, website
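The core projection idea can be sketched with the simplest case: Euclidean projection of the policy parameters onto a single linearized (half-space) constraint. This is an illustrative assumption for exposition, not the full algorithm from the papers; the constraint vector g and bound b below are toy values.

```python
import numpy as np

def project_to_halfspace(theta, g, b):
    """Euclidean projection of theta onto the half-space {x : g @ x <= b}.
    If theta is already feasible, it is returned unchanged; otherwise it
    is moved the minimum distance along g to reach the boundary."""
    violation = g @ theta - b
    if violation <= 0:
        return theta  # already satisfies the constraint
    return theta - (violation / (g @ g)) * g

# toy example: after a reward-maximizing step, project back onto
# the constraint theta_0 + theta_1 <= 2
theta = np.array([2.0, 1.0])
g = np.array([1.0, 1.0])
b = 2.0
theta_proj = project_to_halfspace(theta, g, b)
```

Alternating a reward-improvement step with such a projection is what lets the update recover feasibility whenever the unconstrained step leaves the constraint set.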
Selected research activities
I develop a privacy-preserving technique using generative adversarial networks to learn informative representations without sacrificing the user's privacy. The approach is evaluated on a Massive Open Online Courses (MOOC) dataset and shown to be effective.
Tsung-Yen Yang, Christopher Brinton, Prateek Mittal, Mung Chiang, and Andrew Lan. "Learning Informative and Private Representations via Generative Adversarial Networks." IEEE International Conference on Big Data (Big Data), 2018
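The adversarial training idea behind such privacy-preserving representations is a minimax trade-off: the encoder keeps the representation useful for the downstream task while making a jointly trained adversary's recovery of the private attribute as hard as possible. The sketch below shows only the encoder's combined objective; the trade-off weight lam and the specific loss values are illustrative assumptions, not the paper's settings.

```python
def encoder_objective(task_loss, adversary_loss, lam=1.0):
    """Minimax-style encoder objective: minimize the task loss while
    maximizing the adversary's loss on the private attribute; lam
    controls the utility/privacy trade-off."""
    return task_loss - lam * adversary_loss

# the adversary is trained in alternation to minimize adversary_loss;
# toy values for illustration
loss = encoder_objective(task_loss=0.40, adversary_loss=0.25, lam=2.0)
```

Raising lam pushes the representation toward stronger privacy at some cost in downstream utility, which is the knob such methods expose.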
I also collaborate with other researchers on machine learning for education.
Tsung-Yen Yang, Christopher G. Brinton, Carlee Joe-Wong, and Mung Chiang. "Behavior-based grade prediction for MOOCs via time series neural networks." IEEE Journal of Selected Topics in Signal Processing, 2017