Embedded smart camera architectures with on-sensor feature extraction

Abstract. A smart camera is a standalone machine vision system composed of image capture hardware and a data processor. The data processor allows the camera to extract application-specific information from the acquired images on the fly, and to use such information to make local decisions or transmit it to other devices in a larger system. Smart cameras operating in the visible and infrared spectral bands have diverse applications in border surveillance, assisted driving, noninvasive tumor detection, preventive maintenance of industrial equipment, spectrography, waste sorting, food quality control, forest fire detection, search-and-rescue operations, pedestrian detection, and user identification, among many others.

Reconfigurable devices, such as field-programmable gate arrays, have recently been integrated into smart camera architectures as processing elements to satisfy the high computational requirements of real-time video stream processing. The large number of simple logic elements available in these devices makes them a good match for the fine-grained data parallelism exhibited by most image and video processing algorithms. However, the power efficiency and performance of these architectures are limited by one fundamental property: because image acquisition and processing take place in separate hardware devices, massive amounts of raw image data are sensed, stored, digitized, and finally transferred to the processing element, regardless of whether the algorithm needs all of the data. Because pixel values are transferred on a single data link, the ability of the processor to exploit the spatial parallelism available in the image is severely limited. In contrast, biological organisms attain high efficiency and performance by first operating locally on the data at the image sensor in parallel, and then transferring the resulting information to a central processing system that performs higher cognitive functions.
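To make the fine-grained parallelism concrete, the following sketch computes a 3x3 Sobel gradient magnitude using only shifted-array arithmetic. Every output pixel depends solely on its 3x3 neighborhood, which is the property that lets such operators map onto per-pixel circuits or FPGA logic. This is an illustrative example, not part of the proposed architecture; the function name and kernel choice are ours.

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    Each output pixel is a function of its 3x3 neighborhood only, so
    all pixels can in principle be computed simultaneously -- the
    fine-grained data parallelism referred to in the text.
    """
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")  # replicate borders

    def s(di, dj):
        # Shifted view of the padded image: neighbor at offset (di-1, dj-1)
        h, w = img.shape
        return p[di:di + h, dj:dj + w]

    # Horizontal and vertical Sobel responses built from shifted views
    gx = (s(0, 2) + 2 * s(1, 2) + s(2, 2)) - (s(0, 0) + 2 * s(1, 0) + s(2, 0))
    gy = (s(2, 0) + 2 * s(2, 1) + s(2, 2)) - (s(0, 0) + 2 * s(0, 1) + s(0, 2))
    return np.hypot(gx, gy)
```

On a vertical step edge the response is large along the edge and exactly zero in flat regions, which is the behavior a local feature-extraction stage would exploit to reduce the data sent off-sensor.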

Inspired by these biological systems, in this proposal we aim to develop a heterogeneous smart camera processor architecture that distributes computation between an intelligent image sensor and a high-performance computing node, where the former will be capable of acquiring image data and locally computing image features, and the latter will execute more complex, application-dependent algorithms on the data. Because physical limitations in the design of the image sensor preclude digitization at each pixel, local data will be processed by analog circuits, while the central processing node will be implemented using high-performance reconfigurable digital hardware. We will approach this research along the following principal fronts:

1. We will analyze and design algorithms that exploit massively parallel simple pixel operations and adapt them to the known limitations of analog circuits, such as restricted arithmetic precision, limited linearity, signal offsets, and parameter variation due to device mismatch.

2. We will design and fabricate a flexible smart image sensor featuring analog circuits capable of acquiring image data and computing a set of local operations to support an important class of image processing algorithms.

3. We will design a central computing node capable of interfacing with the smart imager and processing its output on configurable specialized functional units to achieve high performance.

4. We will propose a programming model to help developers reason about their applications and efficiently map them onto the heterogeneous architecture.
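The analog-circuit limitations listed in front 1 (restricted precision, offsets, and device mismatch) can be studied in simulation before silicon is available. The sketch below models a per-pixel analog multiply-accumulate with illustrative, assumed non-ideality parameters; the function name, parameter values, and quantization range are ours, not measured device characteristics.

```python
import numpy as np

def analog_mac(x, w, rng, gain_sigma=0.02, offset_sigma=0.01, bits=6):
    """Model a multiply-accumulate performed by analog per-pixel circuits.

    Non-idealities modeled (all values are illustrative assumptions):
      * gain mismatch: each multiplier's gain drawn from N(1, gain_sigma)
      * additive offset: drawn from N(0, offset_sigma) per multiplier
      * restricted precision: the result is quantized to `bits` bits
        over the range [-1, 1].
    """
    gain = rng.normal(1.0, gain_sigma, size=np.shape(x))
    offset = rng.normal(0.0, offset_sigma, size=np.shape(x))
    y = np.sum(gain * np.asarray(x) * np.asarray(w) + offset)

    # Quantize the accumulated result to a limited-resolution output
    levels = 2 ** bits
    y = np.clip(y, -1.0, 1.0)
    return np.round((y + 1.0) / 2.0 * (levels - 1)) / (levels - 1) * 2.0 - 1.0
```

Sweeping `gain_sigma`, `offset_sigma`, and `bits` against an ideal floating-point reference is one way to decide how much non-ideality a given feature-extraction algorithm tolerates, which is the kind of analysis front 1 calls for.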

As a result of the project, we aim to build an experimental prototype of the architecture that can compute an important class of image processing and machine vision algorithms. Because of its highly parallel processing capabilities, we expect our architecture to deliver high computational performance with low power dissipation, while retaining significant flexibility for use in a diverse range of smart camera applications.

Publications.

  1. “A digital architecture for striping noise compensation in push-broom hyperspectral cameras,” W. Valenzuela, M. Figueroa, J. E. Pezoa, and P. Meza. In Proc. SPIE Optical Engineering + Applications, San Diego Convention Center, San Diego, CA, USA, 9–13 August 2015.
  2. “A custom hardware classifier for bruised apple detection in hyperspectral images,” J. Cárdenas, M. Figueroa, and J. E. Pezoa. In Proc. SPIE Optical Engineering + Applications, San Diego Convention Center, San Diego, CA, USA, 9–13 August 2015.

Funding agency: FONDECYT.

Program: FONDECYT Regular 2018

Grant number: 1180995.

Funding period: April 2018 – March 2021.

PI: Miguel E. Figueroa.

Co-PI: Jorge E. Pezoa.