Welcome

   Bo DAI
    Computational Science and Engineering
    College of Computing
    Georgia Institute of Technology

    Email: bodai AT gatech.edu


I am currently a Ph.D. candidate in Computational Science and Engineering at Georgia Tech, supervised by Prof. Le Song. My principal research interests lie in developing effective statistical models and efficient algorithms for learning from massive volumes of complex, structured, uncertain, and high-dimensional data. In particular, I focus on core machine learning methodology, including stochastic optimization, reinforcement learning, nonparametric methods, and approximate Bayesian inference.


News
  • 2017/05: Our two papers, "Stochastic Generative Hashing" and "Iterative Machine Teaching", have been accepted to ICML2017.
  • 2017/05: Started my internship at Microsoft Research, Redmond, with Lin Xiao, Lihong Li, and Jianshu Chen.
  • 2017/02: Our paper, "Recurrent Hidden Semi-Markov Model", has been accepted to ICLR2017.
  • 2017/01: Our paper, "Learning from Conditional Distributions via Dual Embeddings", has been accepted to AISTATS2017.
  • 2016/05: Started my internship at Google Research, NYC, with Sanjiv Kumar and Ruiqi Guo.
  • 2016/05: Our paper, "Provable Bayesian Inference via Particle Mirror Descent", won the AISTATS2016 Best Student Paper Award.
  • 2016/04: Our paper, "Discriminative Embeddings of Latent Variable Models for Structured Data", has been accepted to ICML2016.
  • 2016/01: Our paper, "Provable Bayesian Inference via Particle Mirror Descent", has been accepted to AISTATS2016.
  • 2015/11: Thanks to Adobe for providing me with a travel grant to NIPS!
  • 2015/11: We presented our paper, "Provable Bayesian Inference via Particle Mirror Descent", at the NIPS2015 workshops "Advances in Approximate Bayesian Inference" and "Scalable Monte Carlo Methods for Bayesian Analysis of Big Data".
  • 2015/06: Our paper, "Scalable Bayesian Inference via Particle Mirror Descent", is now on arXiv.
  • 2014/09: Our paper, "Scalable Kernel Methods via Doubly Stochastic Gradients", has been accepted to NIPS2014.
  • 2014/04: Our two papers, "Nonparametric Estimation of Multi-View Latent Variable Models" and "Transductive Learning with Multi-class Volume Approximation", have been accepted to ICML2014.
  • 2014/02: Our paper, "Information-theoretic Semi-supervised Metric Learning via Entropy Regularization", has been accepted to Neural Computation.