Chengxi Ye (叶承羲)
xAI
PhD, University of Maryland, College Park
Email: yechengxi@gmail.com
Research interests: Deep Learning, Computer Vision, Bioinformatics
I am a member of technical staff at xAI. Previously, I was a software engineer at Google DeepMind. My main research focus is ultra-efficient machine learning with applications to large language models (LLMs).
I obtained my PhD in Computer Science from the University of Maryland under the supervision of Prof. Yiannis Aloimonos and Dr. Cornelia Fermüller.
During my career, I have proposed simplified solutions to several fundamental scientific problems, including:
How to train artificial neural networks better?
Developed methods to robustly train neural networks at arbitrary precision and sparsity, and demonstrated a potential approach to training spiking neural networks.
Introduced redundancy reduction across feature pixels and channels to improve the convergence of deep neural network training, and showed a possible connection to Hubel and Wiesel's center-surround structures and sparse representations.
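The redundancy-reduction idea above can be illustrated with a minimal NumPy sketch of ZCA-style whitening, which removes correlations across flattened feature dimensions by multiplying with the inverse square root of the feature covariance. This is a toy illustration of the underlying principle, not the exact layer-wise procedure from the Network Deconvolution paper; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def deconvolve_features(x, eps=1e-5):
    """ZCA-style whitening of flattened features: multiply by the
    inverse square root of the feature covariance so that pixel-
    and channel-wise correlations are (approximately) removed."""
    x = x - x.mean(axis=0)                      # center each feature
    cov = x.T @ x / x.shape[0]                  # feature covariance
    # inverse principal square root via eigendecomposition
    w, v = np.linalg.eigh(cov + eps * np.eye(cov.shape[1]))
    inv_sqrt = (v / np.sqrt(w)) @ v.T           # cov^(-1/2)
    return x @ inv_sqrt

# correlated toy features -> whitened features with near-identity covariance
rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 6)) @ rng.standard_normal((6, 6))
y = deconvolve_features(x)
```

In a network, applying such a decorrelating transform to the inputs of each layer conditions the optimization problem better, which is one intuition for the faster convergence reported in the paper.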
How to assemble genomes more efficiently?
Reduced the redundancy in the de Bruijn graph representation to achieve a memory-efficient genome assembly algorithm for second-generation sequencing data. The work (project name: SparseAssembler) reduced the computational memory requirement of this fundamental task by 90%, and has been adopted by BGI-Shenzhen to assemble thousands of genomes.
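The sparseness idea can be sketched in a few lines of Python: instead of indexing every k-mer of every read, keep only every g-th k-mer and link consecutive kept k-mers, shrinking the node set roughly g-fold. This is a toy sketch of the principle only; a real assembler such as SparseAssembler also handles reverse complements, sequencing errors, and branch resolution, and the function name here is illustrative.

```python
from collections import defaultdict

def sparse_kmer_graph(reads, k, g):
    """Build a sparse de Bruijn-style graph: sample one anchor
    k-mer every g bases along each read and link consecutive
    anchors, so each stored edge spans g bases instead of 1."""
    links = defaultdict(set)
    for read in reads:
        # anchor k-mers sampled every g bases along the read
        anchors = [read[i:i + k] for i in range(0, len(read) - k + 1, g)]
        for a, b in zip(anchors, anchors[1:]):
            links[a].add(b)
    return links

graph = sparse_kmer_graph(["ACGTACGTTT"], k=3, g=2)
```

Because only one in every g k-mers is stored, the graph's memory footprint drops by roughly a factor of g while the read path remains reconstructible by walking the anchor links.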
EDUCATION
2011–2019: PhD in Computer Science, University of Maryland, College Park
2007–2010: MS in Computer Science, Zhejiang University
2003–2007: BS in Mathematics, Sun Yat-sen University
SELECTED PUBLICATIONS
Ye, C., Chu, G., Liu, Y., Zhang, Y., Lew, L., Zhang, L., Sandler, M., & Howard, A. (2026). Robust training of neural networks at arbitrary precision and sparsity. International Conference on Learning Representations (ICLR).
Ye, C., Zhou, X., McKinney, T., Liu, Y., Zhou, Q., & Zhdanov, F. (2022). Exploiting invariance in training deep neural networks. Proceedings of the AAAI Conference on Artificial Intelligence.
Ye, C., Evanusa, M., He, H., Mitrokhin, A., Goldstein, T., Yorke, J. A., Fermüller, C., & Aloimonos, Y. (2020). Network deconvolution. International Conference on Learning Representations (ICLR), spotlight.
Smith, J. J., Timoshevskaya, N., Ye, C., et al. (2018). The sea lamprey germline genome provides insights into programmed genome rearrangement and vertebrate evolution. Nature Genetics.
Ye, C., Hill, C. M., Wu, S., Ruan, J., & Ma, Z. S. (2016). DBG2OLC: efficient assembly of large genomes using long erroneous reads of the third generation sequencing technologies. Scientific Reports, 6, 31900.
Ye, C., Ma, Z. S., Cannon, C. H., Pop, M., & Yu, D. W. (2012). Exploiting sparseness in de novo genome assembly. BMC Bioinformatics, 13(6), S1.