Low-Bit Quantized Neural Networks 

October 2, 2023 | Paris, France | Hybrid
Virtual Attendance Link: https://fb.zoom.us/j/94117000750?pwd=VWtWemZpTlpUdStYdDI0MzFOVDVrUT09

Welcome to the ICCV 2023 Workshop on Low-Bit Quantized Neural Networks! Quantization has been a staple of efficient neural network deployment for decades. Until recently, quantizing networks to fewer than 8 bits was rare due to technological limitations. Recent years, however, have seen two concurrent trends: the growing capacity and computational cost of state-of-the-art neural network models on the one hand, and on the other, advances in low-bit quantization methods that have pushed them to practical levels of performance in multiple domains. As a result, both the need for and the feasibility of low-bit quantization in neural networks have increased. This is reflected in the growing number of low-bit quantization publications at major conferences and in journals, as well as increasing investment in the topic by large industry players. Low-Bit Quantized Neural Networks (LBQNNs) have emerged as a research area of their own, with distinctive methods and a diverse, growing community. This workshop aims to cover both novel methodologies for LBQNNs and their application to computer vision, bringing together a diverse group of researchers working in several related areas.
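To make the topic concrete, here is a minimal sketch of symmetric uniform quantization to a low bit-width in NumPy. This is an illustrative example only, not a method from the workshop: the function name and the single per-tensor scale are our own simplifying choices, whereas practical low-bit methods add calibration, per-channel scales, and quantization-aware training.

```python
import numpy as np

def quantize_uniform(x, bits=4):
    """Symmetric uniform quantization of a tensor to signed `bits`-bit codes.

    Illustrative sketch only: one scale per tensor, round-to-nearest.
    """
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(x)) / qmax                # per-tensor step size
    codes = np.clip(np.round(x / scale), -qmax, qmax)  # integer codes in [-qmax, qmax]
    return codes * scale                            # dequantized approximation

weights = np.random.randn(256)
approx = quantize_uniform(weights, bits=4)
# Round-to-nearest bounds the per-element error by scale / 2.
print(np.max(np.abs(weights - approx)))
```

At 4 bits each weight is represented by one of only 15 integer codes, which is why low-bit methods need careful scale selection and training to retain accuracy.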

Invited Speakers and Panelists 

Director of Engineering, Qualcomm
Vice-President, HKUST
Associate Professor, MIT
Technical Lead Manager, Meta
CEO, Useful Sensors
R&D Manager, Microsoft

Organizers

Research Scientist, Meta
Research Scientist, Meta
Research Scientist, Meta
Research Scientist, Samsung
Associate Professor, QMUL

Program Committee 

Xijie Huang

Shih-Yang Liu

Chaofan Tao

Yongfan Chen

Jing Liu

Yanjing Li

Ioannis Maniadis Metaxas

Brais Martinez

Enrique Sanchez-Lozano

Yassine Ouali