Speaker: Dr. Jianzhe Lin, Engineering and Architecture Group (EAG) team, Microsoft
Time: September 17, 2025, 1:00 p.m. – 2:30 p.m.
Room: E265, Discovery Park, UNT
Coordinator: Dr. Yang Zhang
Abstract: Retrieval-Augmented Generation (RAG) has emerged as a powerful paradigm for combining large language models (LLMs) with external knowledge sources, enabling more accurate, context-aware, and trustworthy responses. Building on this foundation, GraphRAG introduces structured knowledge retrieval by leveraging graph representations of information, enhancing reasoning over complex relationships and improving explainability, particularly for summarization and multi-hop queries. LazyGraphRAG extends this approach by adopting an on-demand, incremental retrieval strategy, balancing efficiency with retrieval depth for large-scale or dynamic knowledge bases. This talk will provide an overview of the core principles behind RAG, explore how graph-based extensions push the boundaries of retrieval and reasoning, and highlight trade-offs among accuracy, scalability, and efficiency across these methods.
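To make the RAG paradigm described in the abstract concrete, the following is a minimal, self-contained sketch of the retrieve-then-augment loop: rank documents against a query, then prepend the top results to the prompt sent to an LLM. The corpus, the word-overlap scoring, and the prompt format are hypothetical placeholders for illustration only, not the systems discussed in the talk.

```python
# Illustrative RAG sketch (hypothetical toy example, not GraphRAG/LazyGraphRAG).

def retrieve(query, corpus, k=2):
    """Rank documents by simple word overlap with the query; real systems
    use dense embeddings or, in GraphRAG, graph-structured indexes."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy corpus standing in for an external knowledge source.
corpus = [
    "GraphRAG builds a knowledge graph over the corpus for retrieval.",
    "RAG grounds LLM answers in retrieved external documents.",
    "LazyGraphRAG defers graph construction until query time.",
]

query = "What does RAG ground answers in?"
prompt = build_prompt(query, retrieve(query, corpus))
# The prompt would then be passed to an LLM for a grounded answer.
```

The graph-based extensions covered in the talk replace the flat retrieval step here with traversal over entity and relationship structure, which is what enables multi-hop reasoning and corpus-level summarization.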
Bio of the speaker: Dr. Jianzhe Lin is a Senior Applied Scientist on Microsoft's Engineering and Architecture Group (EAG) team (formerly at Microsoft Research), working on document intelligence projects powered by large language models. His research focuses on Retrieval-Augmented Generation (RAG), prompt engineering, vision-large language models (V-LLMs) and large multimodal models, LLM supervised fine-tuning, and multi-agent systems.
Previously, Dr. Jianzhe was a Research Assistant Professor at New York University (NYU), where he specialized in multimodal learning and computer vision. His contributions in computer vision include advances in image classification, segmentation, generation, captioning, video tracking, and detection. In the multimodal domain, his work addressed key challenges in domain adaptation.
Dr. Jianzhe earned a PhD in machine learning and computer vision from the University of British Columbia in 2020. He brings over a decade of professional machine learning experience, with more than 30 publications (ICLR, CVPR, ACM MM, ICME, etc.) and over 1,800 citations.