Haibin Lin | 林海滨

Scholar | Linkedin | Twitter


Haibin works on LLM and AGI infrastructure at Bytedance, focusing on optimizing training performance for LLMs and multimodal models (at scales of more than 10k GPUs). Prior to the LLM era, Haibin worked on collective communication libraries (ByteCCL) and GPU-based recommendation model systems for Douyin and TikTok, in a team led by Yibo Zhu. Before joining Bytedance, he was at Amazon Web Services working on the ML framework core (Apache MXNet) and large-scale NLP model training, in a team led by Mu Li and Alex Smola. He finished his M.S. in Computer Science at the Carnegie Mellon University Database Group, advised by Andy Pavlo. Haibin obtained his Bachelor's degree in Computer Science jointly from the University of Hong Kong and Shanghai Jiao Tong University.

Recently, we have also been working on veScale, a PyTorch-native auto-parallelism framework (collaborations are welcome!).

Software (Python, C++, CUDA)

Papers

Large-scale distributed training systems & HPC

ML frameworks and toolkits

Deep learning

Database systems

Distributed optimization algorithms

Patents

Awards & Services