Dr. Lingjuan Lv/Lyu
Email: lingjuanlvsmile@gmail.com
Hi there, I'm Lingjuan. I have 17+ years of experience in the tech and engineering industry, and I am a winner of the AI 10 Award, the MIT TR35 China Award, an IBM PhD Fellowship, a World's Top 2% Scientists listing, an IJCAI Early Career Spotlight, and 20 best paper awards from top venues.
I'm currently the Global Head of Computer Vision and Privacy-preserving Machine Learning (PPML) at Sony. Our team (distributed globally across the US, Zurich & Tokyo) focuses mainly on business-driven R&D in frontier and trustworthy AI. We aim to close the gap between research and real industrial applications for various business units. I have spearheaded multiple high-impact projects to facilitate collaboration between research and product teams. Our current main areas cover:
Ultra-compact, Low-cost, Powerful & Responsible vision, multi-modal & unified foundation model development and deployment.
Ultra-compact, Low-cost, Powerful & Responsible Generative AI for vision.
Technologies that empower the above two areas, making them faster, cheaper, more practical & safer, including pipeline optimization (training and inference), distillation, federated learning, privacy, security, and IP protection.
Before Sony, I led a federated learning team at Ant Group. I received my PhD in Electrical and Electronic Engineering from The University of Melbourne (#1 in Australia). Before my PhD, I graduated from the Chinese Academy of Sciences. I worked at ANU (Level B3 Fellow), IBM, and Cadence Design Systems a decade ago. More than a decade ago, I also worked on, won awards in, and published papers on humanoid dancing robots, robust watermarking techniques, hardware design, and IoT.
I'm open to collaborating with highly motivated students on model compression/acceleration/deployment and on LLM-based agents that can plan, reason, and execute complex tasks! Email me with your CV if you're interested in working on high-impact projects with me! My previous students have won many prestigious best paper awards, as well as fellowships or scholarships from famous tech companies (IBM, Amazon, Baidu, ByteDance, etc.).
Job Openings:
Great news! We have openings for both research and engineering interns in foundation models and generative AI. Topics cover foundation model development and deployment, data synthesis, knowledge distillation, and on-device AI (model optimization/compression/profiling). You are welcome to apply online or drop me an email!
NeurIPS 2025 Spotlight!
🏆 MIT TR35 China Award!
Multiple papers accepted by ICML'25!
Multiple workshops were accepted — welcome to contribute your work to the ICCV'25 workshop on trustworthy foundation models; ICML'25 The Impact of Memorization on Trustworthy Foundation Models; and FedGenAI-IJCAI'25!
Multiple papers accepted by CVPR'25, 1 Highlight! Check out our Argus-VFM model: with only 100M parameters, it supports 17 practical vision tasks (object detection, pose estimation, instance/semantic/panoptic segmentation, depth estimation, surface normal estimation, human parsing, object boundary detection, saliency detection, anomaly detection, image classification, OCR, deraining, denoising, super-resolution, gaze detection, and more).
WWW'25 Oral!
2 workshops were accepted by CVPR'25! Welcome to submit to The 4th Workshop on Federated Learning for Computer Vision and the CVPR'25 workshop on Test-time Scaling for Computer Vision.
🏆 AI 10 Award!
Invited talk at NIST.
Organizer @ECCV'24 Privacy for Vision & Imaging, welcome to join!
Committee @ECCV’24 The First Dataset Distillation Challenge, welcome to contribute!
🚀 We made it possible to train a diffusion transformer for only $1,890 — the cheapest diffusion model training method by far! Check out our CVPR'25 work Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget, along with our publicly available GitHub repo (1.5K+ stars) and checkpoints.
🚀 Our industry-scale, practical and vision-centric federated learning platform was accepted by ICML'24 and publicly released here. It supports 15+ computer vision tasks, including classification, object detection, segmentation, pose estimation, and more! It also facilitates federated multi-task learning and supports FL for foundation models as well as split learning functionalities! Welcome to star and contribute!
5 papers accepted by NeurIPS'24, 1 Spotlight!
🏆 Best Paper Award, FL@FM-IJCAI'24
3 papers accepted by ECCV'24
Keynote speaker @ Fedvision-CVPR'24
5 papers accepted by ICML'24
Chair of NeurIPS’24 Datasets and Benchmarks track
4 papers accepted by ICLR'24, 1 selected as Oral (top 1.2%)
🏆 My blog was chosen as one of the Best Reads of 2023 by Towards Data Science
Keynote speaker @ FL-NeurIPS'23
World's Top 2% Scientists: 2023, 2024
5 papers @ NeurIPS 2023, 1 Spotlight
🏆 Area Chair Award, ACL’23
🏆 Best Student Paper Award, FL-IJCAI’23
🏆 Best Industry Paper Award, FL4Data-Mining, KDD'23
IJCAI 2023 Early Career Spotlight
5 papers @ ICML 2023
🏆 Best Paper Award, WWW'23 3rd International Workshop on Deep Learning for the Web of Things
5 papers @ ICLR 2023, one Oral (top 1.5%)
🏆 IEEE Outstanding Leadership Award, 2022
🏆 Best Paper Runner-up Award @CIKM’22
EMNLP’22 Oral paper
Six papers @ NeurIPS 2022
🏆 Best Paper Award, FedGraph, CIKM'22
Long paper @COLING’22
🏆 Outstanding Paper Award, ICML’22
Spotlight Paper @ICML’22
Two papers @ Nature Communications
IJCAI'22 Oral paper
Keynote speaker at FL-AAAI’22
🏆 Best Student Paper Award, FL-AAAI’22
🏆 Best Paper Award, AAAI'22 AI for Transportation Workshop
🏆 Most Popular Award, NTU College of Engineering Video Competition, 2022
Prize Winner of Ant Security Tech Ambassador, 2021
🏆 Best Paper Award, FL-IJCAI'20
🏆 IBM Ph.D. Fellowship, 2017