COGS 2025 is co-organized by researchers from Meta and The University of Toronto.
Forrest Iandola completed a PhD in EECS at UC Berkeley, where his research focused on squeezing deep neural networks onto small devices. As part of his dissertation research, he developed the energy-efficient SqueezeNet neural network. His advances in deep learning led to the founding of DeepScale, which was acquired by Tesla in 2019. He is currently an AI Research Scientist at Meta, where his research includes EfficientSAM and MobileLLM. In 2023 and 2024, Forrest was one of the organizers of the CVPR LOVEU workshop and competition for AI video editing.
Zechun Liu is a senior research scientist at Meta Reality Labs. She is the first author of several well-received papers on efficient deep learning, including MobileLLM, LLM-QAT, and SpinQuant. She has published more than 30 papers in prestigious conferences and journals such as ICCV, CVPR, NeurIPS, ICML, and IJCV, with a cumulative citation count of over 4,600. She also played a key role in organizing several high-profile workshops at ICCV'23 and CVPR'21, coordinating speaker invitations, the review process, paper submissions, and workshop hosting.
Cheng Chang is a Research Engineer at Meta Reality Labs Research. He earned his PhD in Computer Vision in 2005 and has since worked in industry on video processing, compression, and streaming technologies. He currently focuses on 3D data compression, streaming, and visualization.
Karthik Ganesan is a postdoctoral fellow at the University of Toronto, working with Prof. Andreas Moshovos. Karthik earned his PhD from the University of Toronto in 2024, where he worked on enabling secure and private machine learning on edge devices. His postdoctoral research focuses on efficient LLM inference on edge devices.
Kareem Ibrahim is a PhD candidate in Electrical and Computer Engineering at the University of Toronto and teaches in the Department of Computer and Mathematical Sciences. His research focuses on enabling efficient machine learning on edge devices, with work spanning model compression, training acceleration, and real-time 3D neural reconstruction. He also investigates evaluation methodologies and metrics to assess the trade-offs between efficiency, quality, and performance in acceleration techniques.
Enrique Torres Sanchez is a PhD candidate in Electrical and Computer Engineering at the University of Toronto. His research focuses on machine learning acceleration across all hardware categories. His work encompasses model compression, quantization, training acceleration, and NeRF acceleration, with a particular focus on bitlength learning and on-the-fly dynamic datatype adaptation.
Hugo Tessier is a postdoctoral fellow at the University of Toronto, where he works on quantization for Gaussian splatting. He earned his PhD in Computer Vision in 2023 at IMT Atlantique, Brest, France, on the subject of deep neural network compression, with an emphasis on pruning. He also completed a postdoctoral position at McGill University, where he worked on simulated annealing.
Miloš Nikolić is the founder and CTO of ByteShape, where he develops data representation learning methods to improve the energy efficiency and performance of machine learning systems. He recently completed his PhD at the University of Toronto under Professor Andreas Moshovos, focusing on low-overhead techniques for optimizing neural network data value representation. His work reduces model footprint by determining minimal bitlengths for training and inference, helping users balance runtime efficiency with output quality. Miloš has collaborated with Professor Yoshua Bengio at MILA, was a postgraduate affiliate at the Vector Institute, and was named a 2025 ML & Systems Rising Star by MLCommons.
Prof. Andreas Moshovos is a full professor in the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto, where he holds the Dusan and Anne Miklas Chair in Engineering Design. He has served as general chair of two top-tier conferences in computer architecture: the ACM International Conference on Parallel Architectures and Compilation Techniques (PACT) in 2008 and the International Symposium on Computer Architecture (ISCA) in 2017. Prof. Moshovos also served as Director of the NSERC COHESA program, a Canada-wide research initiative involving 20 researchers and 7 companies to advance the state of the art in machine learning software and hardware research.