Yuzhu Li
PhD candidate, Electrical & Computer Engineering Department, UCLA
I am a PhD candidate in the UCLA Bio- and Nano-photonics Lab, supervised by Professor Aydogan Ozcan. I received my Master of Science degree from UCLA in June 2022 and my Bachelor of Engineering (BEng) degree from Zhejiang University, China, in June 2020.
My current research focuses on leveraging advanced AI and deep learning tools to improve optical sensing and imaging, and on exploring AI's transformative potential to enhance biomedical applications through these improved optical imaging techniques.
News
2024/08 I am so excited to receive the Best Oral Presentation Award at the SPIE Optics + Photonics Emerging Topics in Artificial Intelligence (ETAI) 2024 conference in San Diego, California.
2024/06 I am so excited to receive the 2024 Amazon Fellowship.
2024/02 Our new research article, "Virtual histological staining of unlabeled autopsy tissue", was published in Nature Communications!
2023/12 Our research on "Rapid, stain-free quantification of viral plaques" was selected for the "Optics in 2023" special issue of Optics & Photonics News (OPN).
2023/10 I am so excited to receive the Emil Wolf Outstanding Student Paper Award at the Optica Frontiers in Optics (FiO) Conference held in Tacoma, Washington.
2022/06 Our research article, "Rapid and stain-free quantification of viral plaque via lens-free holography and deep learning", was published in Nature Biomedical Engineering!
Selected publications
Virtual histological staining of unlabeled autopsy tissue
Nature Communications
Traditional histochemical staining of post-mortem samples often suffers from inferior staining quality due to autolysis caused by delayed fixation of cadaver tissue, and such chemical staining procedures covering large tissue areas demand substantial labor, cost and time. Here, we demonstrate virtual staining of autopsy tissue using a trained neural network to rapidly transform autofluorescence images of label-free autopsy tissue sections into brightfield-equivalent images, matching hematoxylin and eosin (H&E) stained versions of the same samples. The trained model can effectively accentuate nuclear, cytoplasmic and extracellular features in new autopsy tissue samples that experienced severe autolysis, such as previously unseen COVID-19 samples, where traditional histochemical staining fails to provide consistent staining quality. This virtual autopsy staining technique provides a rapid and resource-efficient solution for generating artifact-free H&E stains despite severe autolysis and cell death, while also reducing the labor, cost and infrastructure requirements associated with standard histochemical staining.
Rapid and stain-free quantification of viral plaque via lens-free holography and deep learning
Nature Biomedical Engineering
A plaque assay—the gold-standard method for measuring the concentration of replication-competent lytic virions—requires staining and usually more than 48 h of runtime. Here we show that lens-free holographic imaging and deep learning can be combined to expedite and automate the assay. The compact imaging device captures phase information label-free at a rate of approximately 0.32 gigapixels per hour per well, covers an area of about 30 × 30 mm² and a 10-fold larger dynamic range of virus concentration than standard assays, and quantifies the infected area and the number of plaque-forming units. For the vesicular stomatitis virus, the automated plaque assay detected the first cell-lysing events caused by viral replication as early as 5 h after incubation, and in less than 20 h it detected plaque-forming units at rates higher than 90% at 100% specificity. Furthermore, it reduced the incubation time of the herpes simplex virus type 1 by about 48 h and that of the encephalomyocarditis virus by about 20 h. The stain-free assay should be amenable for use in virology research, vaccine development and clinical diagnosis.
ACS Photonics
Early detection and identification of pathogenic bacteria such as Escherichia coli (E. coli) is an essential task for public health. The conventional culture-based methods for bacterial colony detection usually take ≥24 h to get the final readout. Here, we demonstrate a bacterial colony-forming-unit (CFU) detection system exploiting a thin-film-transistor (TFT)-based image sensor array that saves ∼12 h compared to the Environmental Protection Agency (EPA)-approved methods. To demonstrate the efficacy of this CFU detection system, a lens-free imaging modality was built using the TFT image sensor with a sample field-of-view of ∼7 cm². Time-lapse images of bacterial colonies cultured on chromogenic agar plates were automatically collected at 5 min intervals. Two deep neural networks were used to detect and count the growing colonies and identify their species. When blindly tested with 265 colonies of E. coli and other coliform bacteria (i.e., Citrobacter and Klebsiella pneumoniae), our system reached an average CFU detection rate of 97.3% at 9 h of incubation and an average recovery rate of 91.6% at ∼12 h. This TFT-based sensor can be applied to various microbiological detection methods. Due to the scalability, ultra-large field-of-view, and low cost of TFT-based image sensors, this platform can be integrated with each agar plate to be tested and disposed of after the automated CFU count. The imaging field-of-view of this platform can be cost-effectively increased to >100 cm² to provide a massive throughput for CFU detection using, e.g., roll-to-roll manufacturing of TFTs, as used in the flexible display industry.