Submission Deadline: September 11, 2025 (extended from August 22, 2025)
Review Period & Oral Decisions: September 12 – September 17, 2025
Notification of Acceptance: September 22, 2025
Camera-Ready Deadline: October 15, 2025
Workshop Date: December 6 or 7, 2025
All dates are Anywhere on Earth (AoE).
Recent advances in areas such as computer vision have showcased the power of deep learning models, but these models are still constrained by the nature of the input sensor data. Traditional signal processing pipelines such as Image Signal Processing (ISP) are often designed with human interpretation or conventional metrics in mind, potentially discarding raw sensor information that could better serve downstream machine-centric tasks.
This workshop aims to explore the joint optimization of sensors and deep learning models. We focus on three key areas:
(1) Sensor Optimization, including low-bit data capture (e.g., 4-bit, 6-bit, 8-bit) and learnable sensor layouts co-optimized with network parameters.
(2) Reimagining Sensor Front Ends, shifting from fixed analog-to-digital pipelines to learnable, task-adaptive digitization or even direct signal-to-task pipelines.
(3) Task-Driven Sensor Design, where data capturing is guided by the needs of downstream models rather than traditional goals such as human aesthetics, using novel sensor configurations or adaptive sensing strategies.
This workshop will foster discussions on end-to-end optimization of sensor data acquisition and processing, with the goal of enabling more robust, efficient, and scalable sensor systems. As NeurIPS 2025 brings increasing attention to real-world deployment, particularly on mobile, embedded, and autonomous platforms, this workshop addresses timely challenges at the intersection of sensor design, signal processing, and machine learning.
This workshop will bring together experts from academia and industry to discuss and explore the following key topics:
Sensor Optimization for Vision and Other Tasks: Understanding how sensor design choices, such as pixel layout, bit depth, and filter configurations, can be optimized for deep learning models, and how joint optimization of sensor and model parameters can lead to more efficient sensor systems.
Reimagining Sensor Front Ends: Investigating the possibility of replacing traditional analog-to-digital pipelines with learnable or optimized pipelines that better align with deep learning models. This could involve training end-to-end systems that bypass traditional sensor data processing steps.
Task-Driven Sensor Design: Designing sensors specifically optimized for deep learning models, focusing on maximizing the amount of information captured for the model’s task rather than optimizing for traditional goals such as human visual perception. This could include innovative use of adaptive sensing strategies and sensor layouts.
Joint Optimization of Sensors and Models: Exploring the synergies between sensor hardware and deep learning models through joint optimization. This involves optimizing both the sensor layout and the model’s architecture to work seamlessly together and achieve better performance for specific tasks.
Quantization of RAW Data: Investigating the role of quantization in making sensor data more manageable, such as reducing bit depth for efficient storage and processing, while maintaining model performance (a minimal illustrative sketch follows this list).
Benchmarks for Sensor-Model Systems: Introducing and discussing benchmarks that evaluate the performance of sensor-model systems, including new datasets and evaluation metrics tailored to task-driven sensor designs and optimized analog-to-digital pipelines.
Generalization Across Real-World Scenarios: Focusing on how optimized sensor-model systems can generalize across diverse real-world environments and sensor types. The goal is to build systems that are robust and adaptable in various settings.
Negative Results: Newly proposed joint sensor and deep learning approaches may not always outperform established methods that use RGB images on common vision datasets. Discussing such results is nevertheless valuable, as it helps the community understand what did not work and why, and to build on these attempts. We therefore also welcome submissions reporting negative results.
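To make the joint optimization and RAW quantization topics above more concrete, the sketch below couples a hypothetical learnable 4-bit quantizer (standing in for a tunable sensor parameter) to a tiny classifier, using a straight-through estimator so the task loss can update both. This is purely an illustrative example under assumed names, shapes, and random placeholder data, not a reference implementation or baseline for the workshop.

```python
import torch
import torch.nn as nn

class LearnableQuantizer(nn.Module):
    """Quantizes simulated RAW input to a low bit depth with a learnable gain.

    A straight-through estimator lets gradients bypass the rounding step,
    so the gain (a stand-in for a sensor parameter) is trained jointly
    with the downstream model.
    """
    def __init__(self, bits: int = 4):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.gain = nn.Parameter(torch.tensor(1.0))  # learnable "sensor" parameter

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        x = torch.clamp(raw * self.gain, 0.0, 1.0) * self.levels
        x_q = torch.round(x)
        # Straight-through estimator: forward uses x_q, backward uses x.
        return (x + (x_q - x).detach()) / self.levels

class TinyModel(nn.Module):
    """Minimal classifier consuming the quantized single-channel input."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Jointly optimize the "sensor" (quantizer) and the downstream model.
quantizer, model = LearnableQuantizer(bits=4), TinyModel()
opt = torch.optim.Adam(list(quantizer.parameters()) + list(model.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

raw = torch.rand(16, 1, 32, 32)        # placeholder for linear RAW sensor data
labels = torch.randint(0, 10, (16,))   # placeholder task labels

for step in range(5):
    opt.zero_grad()
    loss = criterion(model(quantizer(raw)), labels)
    loss.backward()
    opt.step()
```

The same pattern extends to other differentiable sensor parameters (e.g., exposure, pixel layout relaxations), which is the kind of end-to-end co-design the workshop invites.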
We accept two categories of submissions:
Full Papers
Papers in this track will be published in the NeurIPS 2025 Workshop Proceedings (PMLR), and must be up to 9 pages excluding references and supplementary material. Accepted papers will be presented as talks or spotlight posters at the workshop.
Extended Abstracts
Papers in this track will not be published in the NeurIPS 2025 Workshop Proceedings (non-archival). In this track we especially invite papers presenting early-stage ideas or ongoing work. Papers should be up to 4 pages excluding references and supplementary material. Accepted papers will be presented as short talks or posters at the workshop.
For both tracks, accepted papers will be presented in person. At least one author of each accepted paper should plan to attend the workshop to present it.
All submissions must follow the NeurIPS 2025 LaTeX style file and will undergo double-blind peer review via OpenReview. Supplementary materials are optional and do not count toward the page limit.
Online submission for both categories, Full Papers and Extended Abstracts, is handled via OpenReview at: https://openreview.net/group?id=NeurIPS.cc/2025/Workshop/L2S.
The submission portal opens on July 15, 2025.
We look forward to receiving your submissions!