I am a Research Scientist at Meta Reality Labs. I received my Ph.D. from the University of California, San Diego, where I was advised by Prof. Manmohan Chandraker. My research focuses on developing large 3D foundation models that understand and synthesize the digital world, empowering AI to perceive, reason, and act within complex, dynamic environments.
My earlier work centered on inverse rendering—reconstructing geometry, material reflectance, and lighting—to achieve realistic, controllable rendering for AR/VR applications like object insertion and relighting. In recent years, I have advanced scalable 3D foundation models for the feed-forward, interactive creation of digital twins. Currently, I am particularly interested in leveraging large-scale video pretraining to tackle challenging 3D vision problems, enabling models to learn directly from the richness and diversity of real-world data.
Internships at Meta: If you are interested in an internship, please feel free to send your CV and a summary of your research interests to my personal email (lizhengqin2012-at-gmail-dot-com) or company email (zhl-at-meta-dot-com).