Zheng XU 许正

xuzhustc AT gmail  [Curriculum Vitae][Google Scholar]


Welcome to Zheng's site! I am a graduate student in the Department of Computer Science at the University of Maryland, College Park. I am currently working on optimization, machine learning, and their applications, under the supervision of Dr. Tom Goldstein.

My recent interests include adaptive stepsizes and primal-dual methods in optimization, neural network acceleration, generative adversarial networks for vision and language, and distributed optimization.

I am also interested in several topics I have previously worked on: low-rank and sparse models, domain adaptation in computer vision, topic models and tensor methods, NER and information extraction, image retrieval, and mining information from large-scale multimedia data.

Before joining Maryland, I was a Project Officer in the Visual Computing Research Group, School of Computer Engineering, Nanyang Technological University, Singapore. I received my M.Eng. and B.Eng. from the Department of Electronic Engineering and Information Science at the University of Science and Technology of China (USTC), advised by Dr. Chang Wen Chen, Dr. Bin Liu, and Dr. Houqiang Li. During my master's studies, I was fortunate to take a long-term internship at Microsoft Research Asia (MSRA) with Dr. Xin-Jing Wang.

I have interned at or collaborated with several companies: Honda, Amazon, IBM, Rolls-Royce, and Microsoft.


News

  • Dec 4-9, traveling to NIPS in Long Beach.
  • Nov 29, passed proposal exam, and advanced to candidacy.
  • Nov 7, "Towards Perceptual Image Dehazing by Physics-based Disentanglement and Adversarial Training" has been accepted to AAAI 2018 (acceptance rate 25%), congrats Xitong and all coauthors.
  • Sep 4, "Training Quantized Nets: A Deeper Understanding" has been accepted to NIPS 2017 (acceptance rate 21%), congrats Hao and Soham, and all coauthors.
  • May 30 - Aug 25, interned with Honda Research in Mountain View. The draft of my internship work on GANs for knowledge distillation is here.
  • Older news