Speaker: Zhengzhong Tu, Assistant Professor at Texas A&M University
Time: September 24, 2025, 1:00 p.m. – 2:30 p.m.
Room: E265, Discovery Park, UNT
Coordinator: Dr. Kewei Sha
Abstract: Vehicle-to-Everything (V2X) technologies have emerged as a promising direction for improving transportation safety and mobility. Traditional ML-based V2X approaches rely on exchanging rigid, predefined data packets, which limits their adaptability and scalability in complex real-world environments. In this talk, I will first introduce our earlier work on transformer-based modeling of V2X communication latency (V2X-ViT, ECCV 2022), which demonstrated the potential of transformer-based deep learning to capture spatial–temporal dynamics in vehicular networks. Building on our recent work, LangCoop (CVPR 2025 MEIS), I will then present our vision for Agentic V2X systems: moving beyond machine-learning-driven data exchange toward vehicles as intelligent agents with reasoning, planning, and collaboration capabilities. This vision unfolds across three critical dimensions: (1) Cognitive Capabilities – enhancing spatial understanding, temporal reasoning, and tool use for more adaptive V2X agents; (2) Connectivity, Interoperability, and Social Context – from natural-language-based collaboration in LangCoop, to shared knowledge pooling in V2X-UniPool, to multi-modal cooperation with UAVs in AirV2X; (3) Safety and Ethicality – addressing the risks of language-based communication for connected automated vehicles. Together, these efforts chart a path toward Agentic V2X systems that go beyond structured communication to achieve intelligent, trustworthy, and human-centric collaborative autonomy.
Speaker Bio: Dr. Zhengzhong Tu has been an Assistant Professor of Computer Science and Engineering at Texas A&M University since September 2024. He received his Ph.D. from the University of Texas at Austin in 2022, advised by Cockrell Family Regents Endowed Chair Professor Alan Bovik. Before joining Texas A&M, Dr. Tu was an AI researcher at Google Research, where he focused on generative foundation models for on-device applications. Dr. Tu has published in IEEE TPAMI, IEEE TIP, NeurIPS, ICLR, CVPR, ECCV, ICCV, ICRA, WACV, and CoRL, among others. He has co-organized the 2nd and 3rd Workshops on LLVM-AD at ITSC 2024 and WACV 2025, the 1st Workshop on WDFM-AD at CVPR 2025, the 2nd MetaFood Workshop at CVPR 2024, and the 1st E2E 3D Workshop at ICCV 2025. He received the first-place award in the AI4Streaming 2024 Challenge at CVPR 2024 and first place in the NTIRE 2025 Short-form UGC VQAE Challenge. He serves as an Associate Editor of IEEE TIP and Co-Chair of the SOGAI special group in VQEG. He was a CVPR 2022 Best Paper Award Finalist and a recipient of the CVPR 2025 MEIS Workshop Best Paper Award and the 2025 Google Research Scholar Award, and his work has been headlined in the Google Research annual blog and featured in Google I/O media coverage.