Welcome



    Bo DAI
    Computational Science and Engineering
    College of Computing
    Georgia Institute of Technology

    Email: bodai AT gatech.edu

    Google Scholar

I am currently a Ph.D. candidate in Computational Science and Engineering at Georgia Tech, supervised by Prof. Le Song. My principal research interests lie in core machine learning methodology for large-scale structured data. Recently, I have been focusing on developing effective statistical models and efficient algorithms for learning from massive volumes of complex, structured, uncertain, and high-dimensional data, e.g., distributions, structures, and dynamics.

My recent work includes:

  • Reinforcement learning: designing effective algorithms that exploit the recursive structure in the dynamics.
  • Large-scale nonparametric machine learning: developing efficient algorithms that scale machine learning methods, especially nonparametric methods, to hundreds of millions of data points (see the sketch after this list).
  • Structured input and output: building effective models that capture structural information in inputs and outputs, e.g., binaries, sequences, trees, and graphs.
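
To give a flavor of the scalable nonparametric line of work (e.g., "Scalable Kernel Methods via Doubly Stochastic Gradients" in the news below), here is a minimal sketch of random Fourier features, one standard building block for approximating kernel methods at scale. The function name and parameters are illustrative only, not taken from any paper's code.

    import numpy as np

    def random_fourier_features(X, n_features=256, gamma=1.0, seed=0):
        """Map X (n x d) to random features whose inner products
        approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        # Sample frequencies from the kernel's spectral density N(0, 2*gamma*I).
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
        b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    # With explicit features, a linear model in Z stands in for the kernel
    # machine at O(n * n_features) cost instead of the O(n^2) Gram matrix.
    X = np.random.randn(500, 10)
    Z = random_fourier_features(X)
    K_approx = Z @ Z.T  # approximates the exact RBF Gram matrix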


News
  • 2018/01: Our two papers, "Boosting the Actor with Dual Critic" and "Syntax-Directed Variational Autoencoder for Structured Data", have been accepted to ICLR2018.
  • 2017/12: Our paper, "Multi-scale Nystrom Method", has been accepted to AISTATS2018.
  • 2017/12: Our paper, "Syntax-Directed Variational Autoencoder for Molecule Generation", won the Best Paper Award in NIPS2017 Machine Learning for Molecules and Materials workshop . 
  • 2017/11: We presented our papers:
    1. "Smoothed Dual Embedding Control" at the NIPS2017 Deep Reinforcement Learning Symposium.
    2. "Learning from Conditional Distributions via Dual Embeddings" at the NIPS2017 Learning on Distributions, Functions, Graphs and Groups workshop.
    3. "Syntax-Directed Variational Autoencoder for Molecule Generation" at the NIPS2017 Machine Learning for Molecules and Materials workshop.
  • 2017/09: Our paper, "Deep Hyperspherical Learning", has been accepted to NIPS2017.
  • 2017/05: Our two papers, "Stochastic Generative Hashing" and "Iterative Machine Teaching" , have been accepted to ICML2017.
  • 2017/05: Started my internship at Microsoft Research, Redmond, working with Lin Xiao, Lihong Li, and Jianshu Chen.
  • 2017/02: Our paper, "Recurrent Hidden Semi-Markov Model", has been accepted to ICLR2017.
  • 2016/12: Our paper, "Learning from Conditional Distributions via Dual Embeddings", has been accepted to AISTATS2017.
  • 2016/05: Started my internship at Google Research, NYC, working with Sanjiv Kumar and Ruiqi Guo.
  • 2016/05: Our paper, "Provable Bayesian Inference via Particle Mirror Descent", won the AISTATS2016 Best Student Paper Award.
  • 2016/04: Our paper, "Discriminative Embeddings of Latent Variable Models for Structured Data", has been accepted to ICML2016.
  • 2016/01: Our paper, "Provable Bayesian Inference via Particle Mirror Descent", has been accepted to AISTATS2016.
  • 2015/11: Thanks to Adobe for providing me with a travel grant to NIPS!
  • 2015/11: We presented our paper, "Provable Bayesian Inference via Particle Mirror Descent", at the NIPS2015 workshops "Advances in Approximate Bayesian Inference" and "Scalable Monte Carlo Methods for Bayesian Analysis of Big Data".
  • 2015/06: Our paper, "Scalable Bayesian Inference via Particle Mirror Descent", is up on arXiv.
  • 2014/09: Our paper, "Scalable Kernel Methods via Doubly Stochastic Gradients", has been accepted to NIPS2014.
  • 2014/04: Our two papers, "Nonparametric Estimation of Multi-View Latent Variable Models" and "Transductive Learning with Multi-class Volume Approximation", have been accepted to ICML2014.
  • 2014/02: Our paper, "Information-theoretic Semi-supervised Metric Learning via Entropy Regularization", has been accepted to Neural Computation.