Presenter:
Sewon Min, Assistant Professor, UC Berkeley
Title: TBD
Bio:
Sewon Min is an Assistant Professor in EECS at UC Berkeley, affiliated with Berkeley AI Research (BAIR), and a Research Scientist at the Allen Institute for AI. Her research lies at the intersection of natural language processing and machine learning, with a focus on large language models (LLMs). She studies the science of LLMs and develops new models and training methods for better performance, flexibility, and adaptability, such as retrieval-based LMs, mixture-of-experts, and modular systems. She also studies LLMs for information-seeking, factuality, privacy, and mathematical reasoning. She has organized tutorials and workshops at major conferences (ACL, EMNLP, NAACL, NeurIPS, ICLR), served as a Senior Area Chair, and received honors including best paper and dissertation awards (among them an ACM Dissertation Award Runner-up), a J.P. Morgan Fellowship, and selection to EECS Rising Stars. She earned her Ph.D. from the University of Washington and has held research roles at Meta AI, Google, and Salesforce.
Presenter:
Jiajun Wu, Assistant Professor, Stanford
Title: TBD
Bio:
Jiajun Wu received his B.Eng. from Tsinghua University and his Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT). He was a Visiting Faculty Researcher at Google before joining Stanford University, where he is an Assistant Professor of Computer Science and, by courtesy, of Psychology. His research focuses on machine learning, computer vision, and cognitive science. He has received multiple honors, including the NSF CAREER Award, an ACM Doctoral Dissertation Award Honorable Mention, MIT's George M. Sprowls PhD Thesis Award, and faculty research awards from Google, Amazon, Meta, Samsung, and J.P. Morgan.
Presenter:
Mengdi Wang, Professor, Princeton University
Title: TBD
Bio:
Mengdi Wang is a Professor in the Department of Electrical and Computer Engineering and the Center for Statistics and Machine Learning at Princeton University. She is also affiliated with the Department of Computer Science and Princeton's ML Theory Group. She has been a visiting research scientist at DeepMind, the Institute for Advanced Study (IAS), and the Simons Institute for the Theory of Computing. Her research focuses on machine learning, reinforcement learning, generative AI, AI for science, and intelligent system applications. Mengdi received her Ph.D. in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2013, where she was affiliated with the Laboratory for Information and Decision Systems and advised by Dimitri P. Bertsekas.
Presenter:
Beidi Chen, Assistant Professor, CMU
Title: TBD
Bio:
Beidi Chen is an Assistant Professor at Carnegie Mellon University. Previously, she was a Visiting Research Scientist at FAIR, and before that a postdoctoral scholar at Stanford University. She received her Ph.D. from Rice University. Her research focuses on efficient AI; specifically, she designs and optimizes algorithms on current hardware to accelerate large machine learning systems. Her work received a best paper runner-up award at ICML 2022, and she was selected as a Rising Star in EECS by MIT and UIUC.
Presenter:
Song Han, Associate Professor, MIT
Title: TBD
Bio:
Song Han is an Associate Professor in Electrical Engineering and Computer Science at MIT. He received his Ph.D. in Computer Science from Stanford University, where he pioneered deep learning model compression techniques—including pruning and quantization—that have become foundational in efficient AI. His research spans efficient neural networks, hardware–algorithm co-design, scalable training, and real-world deployment of edge and on-device intelligence. His recognitions include the NSF CAREER Award, the ONR Young Investigator Award, two Google Faculty Research Awards, and inclusion in the MIT TR35 list.
Presenter:
Soumalya Sarkar, Senior Principal Scientist, RTRC
Title: TBD
Bio:
Soumalya Sarkar is a Senior Principal Scientist in the AI discipline at the Raytheon Technologies Research Center (RTRC). He earned his Ph.D. in Mechanical Engineering (with a focus on machine learning in electromechanical systems) from Penn State University in 2015. He leads projects in physics-informed AI, multi-fidelity modeling, simulation acceleration, knowledge graphs, and black-box optimization in aerospace and engineering domains. His honors include the 2021 Technical Excellence Award at RTRC, multiple Outstanding Achievement Awards, and selection to the National Academy of Engineering's US Frontiers of Engineering symposium.