Demos and Videos

Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays

[Video: ECCV_WITH_3RD_PERSON_DIGITS.mp4]

Our paper "Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays" has been accepted in ECCV 2020.

We demonstrate, for the first time, complete inference of a CNN on the focal plane of a sensor. The key idea behind our approach is storing network weights "in-pixel", allowing the parallel analogue computation of the SCAMP PPA to be fully utilized.

This work considers a baseline digit-classification task running at over 3,000 frames per second. We are now moving on to more sophisticated tasks, and our approach is general enough to transfer readily to future Pixel Processor Array devices.

arXiv version: https://arxiv.org/abs/2004.12525
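
As a rough illustration of how a convolution maps onto an array where every pixel holds its own weights, here is a minimal numpy sketch (the names shift and conv_in_pixel and this formulation are our own, not the SCAMP-5 kernel code): the whole image plane is shifted to each filter offset and accumulated, scaled by the stored weight, so that on real hardware every output pixel updates simultaneously.

    import numpy as np

    def shift(img, dy, dx):
        """Shift the whole image plane by (dy, dx); a PPA does this with
        local neighbour-to-neighbour register transfers."""
        h, w = img.shape
        out = np.zeros_like(img)
        out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
            img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        return out

    def conv_in_pixel(img, kernel):
        """Convolution as a sum of shifted image planes: each pixel scales
        the shifted copies by the weights stored in its own registers."""
        k = kernel.shape[0] // 2  # assumes an odd-sized kernel
        acc = np.zeros_like(img, dtype=float)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                acc += kernel[k + dy, k + dx] * shift(img, dy, dx)
        return acc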

High-speed Light-weight CNN Inference via Strided Convolutions on a Pixel Processor Array

Performance, storage, and power consumption are three major factors that restrict the use of machine learning algorithms on embedded systems. However, new hardware architectures designed with visual computation in mind may hold the key to solving these bottlenecks. This work makes use of a novel vision device, the pixel processor array (PPA), to embed a convolutional neural network (CNN) onto the focal plane. We present a new high-speed implementation of strided convolutions using binary weights for CNNs on PPA devices, allowing all multiplications to be replaced by more efficient addition/subtraction operations. Image convolutions, ReLU activation functions, max-pooling and a fully-connected layer are all performed directly on the PPA's imaging plane, exploiting its massively parallel computing capabilities. We demonstrate CNN inference across four different applications, running at between 2,000 and 17,500 fps with power consumption below 1.5 W. These tasks include identifying eight classes of plankton, hand gesture classification and digit recognition.
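
The arithmetic of the binary-weight trick is easy to show in isolation. The serial numpy sketch below only illustrates the per-window arithmetic (the function names, stride-2 layout and 2x2 pooling here are our own illustrative choices); on the PPA the same add/subtract pattern executes in every pixel at once.

    import numpy as np

    def binary_conv2d(img, signs, stride=2):
        """Strided convolution with +/-1 weights: every 'multiplication'
        collapses to an addition or a subtraction."""
        k = signs.shape[0]
        h, w = img.shape
        out = np.zeros(((h - k) // stride + 1, (w - k) // stride + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                win = img[i * stride:i * stride + k, j * stride:j * stride + k]
                # add where the weight is +1, subtract where it is -1
                out[i, j] = win[signs > 0].sum() - win[signs < 0].sum()
        return out

    def relu(x):
        return np.maximum(x, 0)

    def maxpool2(x):
        """2x2 max-pooling (assumes even dimensions)."""
        return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))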



Our paper "A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays " has been accepted as oral for ICCV 2019 and as a conference demo.

This work takes a first step towards embedding neural networks directly onto the focal plane of a sensor.

arXiv version: https://arxiv.org/abs/1909.05647

Visual Target Tracking with a Parallel Visual Processor

A quadrotor tracks a visual target using SCAMP. The SCAMP returns just the bounding box (x, y, height, width) of the target's location in the image plane. The onboard controller fuses this information with IMU and height data to calculate the distance to the target.
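
One simple way to recover range from such a bounding box is a pinhole-camera model. The sketch below is our own illustration, not the controller from the video (which additionally fuses IMU and height data); focal_px and target_width_m are assumed, calibrated quantities.

    def target_distance(bbox_width_px, target_width_m, focal_px):
        """Pinhole range estimate Z = f * W / w (metres), given the target's
        known physical width and its measured width in pixels."""
        return focal_px * target_width_m / bbox_width_px

    # e.g. a 0.5 m wide target imaged 40 px wide with f = 160 px sits ~2 m away
    print(target_distance(40, 0.5, 160))  # -> 2.0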

Conference Video "Tracking control of a UAV with a parallel visual processor"

Visual Odometry with a Pixel Processor Array

Details and link to paper to follow

Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel Processor Array

This work presents an agile reactive navigation strategy for driving a non-holonomic ground vehicle around a preset course of gates in a cluttered environment using a low-cost processor array sensor. This enables machine vision tasks to be performed directly upon the sensor's image plane, rather than on a separate general-purpose computer. We demonstrate a small ground vehicle running through or avoiding multiple gates at high speed using minimal computational resources. To achieve this, target tracking algorithms are developed for the Pixel Processor Array, and captured images are processed directly on the vision sensor to acquire the target information used to control the ground vehicle. The algorithm can run at up to 2,000 fps outdoors and 200 fps at indoor illumination levels. Conducting image processing at the sensor level avoids the bottleneck of image transfer encountered in conventional sensors. The real-time performance and robustness of the on-board image processing are validated through experiments. Experimental results demonstrate the algorithm's ability to enable a ground vehicle to navigate at an average speed of 2.20 m/s when passing through multiple gates, and 3.88 m/s in a 'slalom' task, in an environment featuring significant visual clutter.
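
The control side of such a reactive scheme can be as light as a proportional steering law on the tracked gate's pixel coordinates. This hypothetical sketch is our own (the gain and sign convention are assumptions, not the paper's controller; 256 is SCAMP-5's image width):

    def steering_command(target_x_px, img_width=256, gain=0.02):
        """Steer in proportion to the target's horizontal offset from the
        image centre; positive output = turn right (assumed convention)."""
        error_px = target_x_px - img_width / 2
        return gain * error_px

The point is that the per-frame vision output is a handful of numbers, small enough to act on at thousands of frames per second.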

Feature Extraction

[Video: scamp_feature_extraction.wmv]
[Video: scamp_feature_extraction2.wmv]

This work demonstrates high-speed, low-power feature extraction implemented on a portable vision system based on the SCAMP-5 vision chip. This embedded system executes a parallelized FAST16 corner detection algorithm on the vision chip's programmable on-focal-plane processor array to detect the corner points in the current image frame. The coordinates of these corner points are then extracted using the vision chip's event address readout infrastructure. These coordinates, as sparse data, can then be output through any of the IO buses within the vision system's micro-controller at low latency. The USB-powered (400 mA) system is capable of outputting 250 features at 2,300 frames per second (fps) in ideal lighting conditions, while 1,000 fps can be achieved in an indoor environment. The system can be applied to the real-time control of agile robots and miniature aerial vehicles.
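
For reference, the segment test that FAST-style detectors apply at every pixel looks like the serial sketch below (the 16-point Bresenham ring is the standard FAST formulation; the threshold and arc-length values are generic defaults, not the paper's, and the chip evaluates a parallel variant of this test in every pixel simultaneously).

    # the 16 offsets of the radius-3 Bresenham circle used by FAST
    RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
            (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

    def is_fast_corner(img, y, x, t=20, n=9):
        """Segment test: (y, x) is a corner iff n contiguous ring pixels are
        all brighter than p + t or all darker than p - t.
        Assumes y, x lie at least 3 pixels from every image border."""
        p = int(img[y, x])
        ring = [int(img[y + dy, x + dx]) for dy, dx in RING]
        for flags in ([v > p + t for v in ring], [v < p - t for v in ring]):
            run = 0
            for f in flags + flags[:n]:  # duplicate the head to catch wrap-around runs
                run = run + 1 if f else 0
                if run >= n:
                    return True
        return False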

High Dynamic Range

Varying light levels across the image can make it difficult to see detail throughout the scene. SCAMP can produce an HDR image prior to running other vision algorithms, helping it cope with difficult lighting conditions.

The two images on the left show the two extremes: in the first, the bright areas are properly exposed; in the second, the darker areas are.
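
A common way to combine such an exposure pair is sketched below, purely as an illustration (the saturation threshold and exposure ratio are assumed parameters, and SCAMP's on-chip method may differ):

    import numpy as np

    def fuse_exposures(short_exp, long_exp, exposure_ratio, sat=250):
        """Per-pixel fusion of two 8-bit exposures: keep the long exposure
        where it is unsaturated, else use the short one rescaled by the
        known exposure-time ratio."""
        return np.where(long_exp < sat,
                        long_exp.astype(float),
                        short_exp.astype(float) * exposure_ratio)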

Edge Detection

Visual image captured by SCAMP

Edge detection returned by SCAMP

Info about edge detection to follow
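
In the meantime, a neighbour-difference gradient gives a feel for the kind of edge operator a PPA can run in every pixel in parallel. This numpy sketch is purely illustrative and not necessarily the operator SCAMP uses (note np.roll wraps at the borders, which a real implementation would mask):

    import numpy as np

    def edge_map(img, thresh=50):
        """Gradient-magnitude edges from left/right and up/down neighbour
        differences -- the kind of local operation each PPA pixel performs."""
        img = img.astype(float)
        gx = np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)
        gy = np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)
        return np.hypot(gx, gy) > thresh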

Image Scaling and Rotation

Details to follow

Tracking at 100,000 Frames Per Second

The SCAMP chip has been used to perform real-time image-processing operations at 100,000 fps, locating a closed-shape object amongst clutter. Further detail on this work is presented here: http://ieeexplore.ieee.org/abstract/document/6578654
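
The closed-shape idea can be illustrated off-chip with a border flood fill: background that cannot be reached from the image border must be enclosed by a closed contour. This serial sketch is only an analogue of the chip's parallel propagation (the function name and 4-connectivity choice are ours, not the paper's method):

    import numpy as np
    from collections import deque

    def enclosed_interior(binary):
        """Flood-fill the background (False pixels) from the border; any
        background pixel never reached lies inside a closed shape."""
        h, w = binary.shape
        reached = np.zeros((h, w), dtype=bool)
        q = deque()
        for y in range(h):
            for x in range(w):
                if (y in (0, h - 1) or x in (0, w - 1)) and not binary[y, x]:
                    reached[y, x] = True
                    q.append((y, x))
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w \
                        and not binary[ny, nx] and not reached[ny, nx]:
                    reached[ny, nx] = True
                    q.append((ny, nx))
        return ~binary & ~reached  # interior background of closed shapes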