Sample-efficient algorithms, task-agnostic learning, and transfer learning for CV/RL
Understanding and improving neural networks through physics and mathematics
W. Liu, Y. You, Y. Li, J. Shang, “Gradient-based Wang–Landau Algorithm: A Novel Sampler for Output Distribution of Neural Networks over the Input Space,” In International Conference on Machine Learning (ICML), 2023.
H. Wang, W. Liu, A. Bocchieri, Y. Li, “Can multi-label classification networks know what they don’t know?,” accepted by NeurIPS 2021 main conference.
W. Liu, X. Wang, J. Owens, Y. Li, “Energy-based Out-of-distribution Detection,” accepted by NeurIPS 2020 main conference.
W. Liu, L. Wei, J. Sharpnack, J. Owens, “Unsupervised Object Segmentation with Explicit Localization Module,” accepted by NeurIPS 2019 workshop. [PDF]
W. Liu, E. Barsoum, J. Owens, “Object Localization and Motion Transfer Learning with a CapsNet,” accepted by NeurIPS 2019 workshop. [PDF]
K. Azizzadenesheli, W. Liu, Z. Lipton, A. Anandkumar, “Sample-Efficient Deep RL with Generative Adversarial Tree Search,” accepted by ICML 2018 workshop. [PDF]
K. Azizzadenesheli, B. Yang, W. Liu, E. Brunskill, Z. Lipton, A. Anandkumar, “Surprising Negative Results for Generative Adversarial Tree Search,” accepted by NeurIPS 2018 workshop.
To better understand a model's classification criteria and quantify its performance, we propose a novel sampler that can cover the entire input space. We show that it is empirically practical to study a model's output distribution over this most general domain.
W. Liu, Y. You, Y. Li, J. Shang, “Gradient-based Wang–Landau Algorithm: A Novel Sampler for Output Distribution of Neural Networks over the Input Space,” In International Conference on Machine Learning (ICML), 2023.
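The classical Wang–Landau algorithm that this sampler builds on can be illustrated on a toy system. The sketch below (our own illustration, not the paper's gradient-based variant) estimates the density of states of an 8-spin periodic 1D Ising chain by random spin flips, accepting moves with probability g(E_old)/g(E_new) and enforcing a flat histogram over energy levels:

```python
import numpy as np

def ising_energy(s):
    # Periodic 1D Ising chain: E = -sum_i s_i * s_{i+1}
    return -int(np.sum(s * np.roll(s, 1)))

def wang_landau(L=8, f_final=1e-5, flatness=0.8, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=L)
    E = ising_energy(s)
    log_g = {}   # running estimate of the log density of states
    hist = {}    # visit histogram used for the flatness check
    log_f = 1.0  # modification factor, halved whenever hist is flat
    while log_f > f_final:
        for _ in range(10000):
            i = rng.integers(L)
            s[i] *= -1                       # propose a single spin flip
            E_new = ising_energy(s)
            # Accept with probability min(1, g(E_old) / g(E_new)),
            # which biases the walk toward rarely visited energies.
            if np.log(rng.random()) <= log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                E = E_new
            else:
                s[i] *= -1                   # reject: undo the flip
            log_g[E] = log_g.get(E, 0.0) + log_f
            hist[E] = hist.get(E, 0) + 1
        h = np.array(list(hist.values()))
        if h.min() > flatness * h.mean():    # histogram flat enough?
            log_f /= 2.0
            hist = {}
    return log_g
```

Normalizing exp(log_g) so the counts sum to 2^8 = 256, the estimates should approach the exact state counts 2, 56, 140, 56, 2 for E = -8, -4, 0, 4, 8 (2·C(8, k) configurations with k domain walls). The paper's contribution, roughly, is to replace such blind proposal moves with gradient-guided ones so the walk remains practical over a high-dimensional neural-network input space.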
We are the first to explore out-of-distribution detection in the multi-label classification setting. We propose a new energy score for this problem and show, theoretically and empirically, that it distinguishes in- and out-of-distribution samples.
H. Wang, W. Liu, A. Bocchieri, Y. Li, “Can multi-label classification networks know what they don’t know?,” accepted by NeurIPS 2021 main conference.
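The core idea can be sketched as a label-wise energy aggregation: each label contributes a per-label free energy, and the contributions are summed over all labels. The function below is a minimal illustration under the assumption of one independent logit per label (the name `joint_energy` and the toy logits are ours, not from the paper).

```python
import numpy as np

def joint_energy(logits):
    """Sum of per-label free energies: sum_i log(1 + exp(f_i)).

    Higher values suggest in-distribution inputs, i.e. inputs for
    which at least some labels are confidently activated.
    """
    # np.logaddexp(0, f) computes log(1 + exp(f)) in a numerically stable way.
    return float(np.sum(np.logaddexp(0.0, np.asarray(logits, dtype=float))))
```

An input with confident positive labels (e.g. logits `[8, 7, -6]`) scores much higher than one where every label is confidently off (e.g. `[-6, -6, -6]`), so thresholding this score separates the two regimes.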
We propose a unified framework for OOD detection using an energy score. We show theoretically and empirically that energy scores distinguish in- and out-of-distribution samples better than the traditional softmax-score approach.
W. Liu, X. Wang, J. Owens, Y. Li, “Energy-based Out-of-distribution Detection,” accepted by NeurIPS 2020 main conference.
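The energy score in question is the free energy of the classifier's logits, E(x) = -T · log Σ_i exp(f_i(x)/T). A minimal, self-contained sketch (function name and example logits are ours):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """E(x) = -T * log sum_i exp(f_i(x) / T).

    Lower (more negative) energy indicates a more confident prediction,
    which tends to correspond to an in-distribution input.
    """
    z = np.asarray(logits, dtype=float) / T
    m = z.max()  # max-shift for numerical stability
    return float(-T * (m + np.log(np.sum(np.exp(z - m)))))
```

For example, confident logits like `[10, 0, 0]` yield a much lower energy than a flat, uncertain `[0, 0, 0]`, so a simple threshold on the energy can flag out-of-distribution inputs.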
We propose a novel architecture that iteratively discovers and segments out the objects of a scene based on image reconstruction quality. We show that our localization module improves segmentation quality, especially on challenging backgrounds.
W. Liu, L. Wei, J. Sharpnack, J. Owens, “Unsupervised Object Segmentation with Explicit Localization Module,” accepted by NeurIPS 2019 workshop. [PDF]
The goal of this research project is to investigate how a CapsNet derives the coordinates of objects in an image and transfers the learned motion to new objects.
W. Liu, E. Barsoum, J. Owens, “Object Localization and Motion Transfer Learning with a CapsNet,” accepted by NeurIPS 2019 workshop. [PDF]
The goal of this research project was to explore how deep generative models can improve sample efficiency in deep reinforcement learning.
We present a highly adaptive model that combines a prior video-prediction model with generative adversarial networks (GANs). Our method generates realistic-looking images with long-term dependencies, and we show that the generated images speed up DQN training.
K. Azizzadenesheli, W. Liu, Z. Lipton, A. Anandkumar, “Sample-Efficient Deep RL with Generative Adversarial Tree Search,” accepted by ICML 2018 workshop. [PDF]
K. Azizzadenesheli, B. Yang, W. Liu, E. Brunskill, Z. Lipton, A. Anandkumar, “Surprising Negative Results for Generative Adversarial Tree Search,” accepted by NeurIPS 2018 workshop.