Demos and Videos

Weighted Node Mapping and Localisation on a Pixel Processor Array, ICRA 2021, Xi'an, China

Abstract— This paper implements and demonstrates visual route mapping and localisation upon a Pixel Processor Array (PPA). The PPA sensor comprises an array of Processing Elements (PEs), each of which can capture and process visual information directly. This provides significant parallel processing power, allowing novel ways of processing information on-sensor. Our method predicts the correct node within a topological map generated from an image sequence by measuring image similarities and spatial coherence, exploiting the parallel nature of the PPA. Our implementation runs at over 300 Hz on large public datasets with over 2,000 locations, requiring 2.5 W at 500 GOPS/W. We compare against traditionally implemented methods, demonstrating better F1 performance even in simulation. To the best of our knowledge, this is the first mapping and localisation system running entirely on-sensor.

paper available: https://www.researchgate.net/publication/350187131_Weighted_Node_Mapping_and_Localisation_on_a_Pixel_Processor_Array
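The prediction step lends itself to a compact sketch. Below is a minimal serial model of the idea, assuming sum-of-absolute-differences image similarity and a Gaussian prior over node indices centred on the previous match; the function names and exact weighting are illustrative, not the paper's formulation.

    import numpy as np

    def predict_node(frame, node_images, prev_node, sigma=3.0):
        # Image similarity: negative sum of absolute differences (SAD)
        # against every stored node image, normalised to [0, 1].
        sims = np.array([-np.abs(frame.astype(np.float64) - img).sum()
                         for img in node_images])
        sims = (sims - sims.min()) / (np.ptp(sims) + 1e-9)
        # Spatial coherence: favour nodes near the previously matched one.
        idx = np.arange(len(node_images))
        prior = np.exp(-0.5 * ((idx - prev_node) / sigma) ** 2)
        return int(np.argmax(sims * prior))

On the PPA the per-node similarities are computed in parallel on the focal plane; the serial loop above only stands in for that.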

Pixel Processor Arrays for Bridging Perception and Action in Agile Robots - WACV 2021 talk

Abstract: This talk will discuss recent advances in the development of visual architectures and their algorithms towards a step change in agile robotics. The visual architectures currently used in robotic systems were designed for video recording or graphics processing, not for vision or action. This hinders systems requiring low lag and low energy consumption and, importantly, forces visual algorithms to process the world in ways that prevent effective coding for actions. In this talk I will describe work towards new architectures, specifically pixel processor arrays such as the SCAMP, that allow massive parallelism and focal-plane processing with reduced energy consumption. Examples include how these new architectures allow visual computation to be performed onboard agile vehicles for tasks involving visual odometry, recognition and deep network inference.

Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays

Our paper "Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays" has been accepted in ECCV 2020.

We demonstrate for the first time complete inference of a CNN upon the focal plane of a sensor. The key idea behind our approach is storing network weights "in-pixel", allowing the parallel analogue computation of the SCAMP PPA to be fully utilized.

This work considers digit classification as a baseline task, running at over 3,000 frames per second. We are moving on to more sophisticated tasks, and our approach is general enough to transfer easily to future Pixel Processor Array devices.

Arxiv version: https://arxiv.org/abs/2004.12525
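As a rough illustration of the "in-pixel" idea described above, the sketch below models a convolution as plane-wide shift-and-accumulate operations, mirroring how every processing element applies the same stored weight to its own pixel in parallel. This is a NumPy stand-in, not SCAMP kernel code.

    import numpy as np

    def ppa_conv(image, kernel):
        # Each kernel tap is applied by shifting the whole image plane and
        # accumulating a plane-wise multiply, as every PE would in parallel.
        h, w = kernel.shape
        acc = np.zeros(image.shape, dtype=np.float64)
        for dy in range(h):
            for dx in range(w):
                shifted = np.roll(image, (dy - h // 2, dx - w // 2),
                                  axis=(0, 1))
                acc += kernel[dy, dx] * shifted
        return acc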

Towards Drone Racing with a Pixel Processor Array

IMAV2019

Drone racing is an interesting scenario for an agile MAV due to the need for rapid response and high accelerations. In this paper we use a Pixel Processor Array (PPA), demonstrating the marriage of perception and compute capabilities on the same device. A PPA consists of a parallel array of processing elements, each of which features light capture, processing and storage capabilities, allowing various image processing tasks to be performed efficiently on the sensor itself. This paper presents the use of a PPA for gate detection and location in a typical drone racing scenario. Conventional sensing techniques typically require significant processing overheads on separate hardware, resulting in lower frame rates and higher power consumption than is possible with a PPA. The results given here demonstrate gate detection and location with real-time planning to account for uncertainty in the gate location. Additionally, the PPA only needs to output specific information, such as the estimated target location variables, rather than entire images. This significantly reduces the bandwidth required for communication between the sensor and the on-board computer, further enabling high frame-rate, low-power operation.


Colin Greatwood, Laurie Bose, Thomas Richardson, Walterio Mayol-Cuevas, Robert Clarke, Jianing Chen, Stephen J. Carey and Piotr Dudek. Towards drone racing with a pixel processor array. IMAV 2019.

https://personalpages.manchester.ac.uk/staff/p.dudek/papers/greatwood-imav2019.pdf
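A minimal sketch of the "output only the target variables" idea from the abstract above: segment the gate (here by a plain threshold, purely for illustration) and reduce the whole frame to four numbers, so only (x, y, width, height) ever leaves the sensor.

    import numpy as np

    def gate_bbox(frame, thresh=128):
        # Bright-pixel segmentation stands in for the on-sensor gate filter.
        ys, xs = np.nonzero(frame > thresh)
        if xs.size == 0:
            return None  # no gate detected in this frame
        x, y = xs.min(), ys.min()
        return (x, y, xs.max() - x + 1, ys.max() - y + 1)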


High-speed Light-weight CNN Inference via Strided Convolutions on a Pixel Processor Array

BMVC 2020 www.bmvc2020-conference.com/conference/papers/paper_0126.html

Performance, storage, and power consumption are three major factors that restrict the use of machine learning algorithms on embedded systems. However, new hardware architectures designed with visual computation in mind may hold the key to solving these bottlenecks. This work makes use of a novel visual device, the pixel processor array (PPA), to embed a convolutional neural network (CNN) onto the focal plane. We present a new high-speed implementation of strided convolutions using binary weights for the CNN on PPA devices, allowing all multiplications to be replaced by more efficient addition/subtraction operations. Image convolutions, ReLU activation functions, max-pooling and a fully-connected layer are all performed directly on the PPA's imaging plane, exploiting its massively parallel computing capabilities. We demonstrate CNN inference across four different applications, running between 2,000 and 17,500 fps with power consumption lower than 1.5 W. These tasks include identifying eight classes of plankton, hand gesture classification and digit recognition.
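The core trick is that with weights constrained to {-1, +1}, every multiplication collapses to an addition or subtraction. The sketch below is an illustrative serial model of such a binary-weight strided convolution, not the on-chip implementation.

    import numpy as np

    def binary_strided_conv(image, signs, stride=2):
        # `signs` is a k x k array of +1/-1 weights: +1 taps are added and
        # -1 taps subtracted, so no multiplies are needed.
        k = signs.shape[0]
        out_h = (image.shape[0] - k) // stride + 1
        out_w = (image.shape[1] - k) // stride + 1
        out = np.zeros((out_h, out_w))
        for y in range(out_h):
            for x in range(out_w):
                patch = image[y * stride:y * stride + k,
                              x * stride:x * stride + k]
                out[y, x] = patch[signs > 0].sum() - patch[signs < 0].sum()
        return out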

RSS 2020 Workshop

Talk on "Towards Autonomous Drone Racing Using the SCAMP Pixel Processor Array" at the RSS Workshop on Perception and Control for Fast and Agile Super-Vehicles.

July 12th, RSS 2020

A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays

Our paper "A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays " has been accepted as oral for ICCV 2019 and as a conference demo.

This work takes a first step towards embedding neural networks directly onto the focal plane of a sensor.

Arxiv version: https://arxiv.org/abs/1909.05647

Visual Target Tracking with a parallel visual processor

A quadrotor tracking a visual target using SCAMP. The SCAMP returns just the bounding box (x, y, height, width) of the target location in the image plane. The onboard controller fuses this information with IMU and height data to calculate the distance to the target.

Conference Video "Tracking control of a UAV with a parallel visual processor"
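The range calculation from the bounding box follows a standard pinhole model; a minimal sketch is below, where the focal length, image width and physical target size are assumed example values rather than those of the real system.

    import math

    FOCAL_PX = 600.0    # assumed focal length in pixels
    TARGET_W_M = 0.30   # assumed physical target width in metres

    def target_distance(bbox_w_px):
        # Similar triangles: distance = f * real_width / pixel_width.
        return FOCAL_PX * TARGET_W_M / bbox_w_px

    def bearing_to_target(bbox_x_px, bbox_w_px, image_w_px=256):
        # Horizontal angle from the optical axis to the box centre.
        cx = bbox_x_px + bbox_w_px / 2.0 - image_w_px / 2.0
        return math.atan2(cx, FOCAL_PX)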

Visual Odometry with a Pixel Processor Array

Our paper "Visual Odometry for Pixel Processor Arrays" has been accepted in ICCV 2017 for oral presentation.

We present an approach for estimating constrained ego-motion on a Pixel Processor Array (PPA). These devices embed processing and data storage capability into the pixels of the image sensor, allowing for fast, low-power parallel computation directly on the image plane. Rather than the standard visual pipeline, whereby whole images are transferred to an external general processing unit, our approach performs all computation upon the PPA itself, with the camera's estimated motion as the only information output.

https://openaccess.thecvf.com/content_ICCV_2017/papers/Bose_Visual_Odometry_for_ICCV_2017_paper.pdf
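One way to picture the on-plane alignment: test a small window of whole-plane shifts and keep the one that best registers consecutive frames. The PPA evaluates such shifted comparisons in parallel across the array; the NumPy sketch below models it serially and is illustrative only.

    import numpy as np

    def estimate_shift(prev, curr, max_shift=4):
        # Exhaustive search over small translations; returns the (dy, dx)
        # that minimises the sum of absolute differences between frames.
        best, best_err = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(prev, (dy, dx), axis=(0, 1)).astype(np.int64)
                err = np.abs(shifted - curr.astype(np.int64)).sum()
                if err < best_err:
                    best, best_err = (dy, dx), err
        return best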


Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel Processor Array

This work presents an agile reactive navigation strategy for driving a non-holonomic ground vehicle around a preset course of gates in a cluttered environment using a low-cost processor array sensor. This enables machine vision tasks to be performed directly upon the sensor's image plane, rather than using a separate general-purpose computer. We demonstrate a small ground vehicle running through or avoiding multiple gates at high speed using minimal computational resources. To achieve this, target tracking algorithms are developed for the Pixel Processor Array, and captured images are processed directly on the vision sensor to acquire the target information used to control the ground vehicle; a minimal control sketch follows the abstract below. The algorithm can run at up to 2,000 fps outdoors and 200 fps at indoor illumination levels. Conducting image processing at the sensor level avoids the bottleneck of image transfer encountered in conventional sensors. The real-time performance and robustness of the on-board image processing are validated through experiments. Experimental results demonstrate the algorithm's ability to enable a ground vehicle to navigate at an average speed of 2.20 m/s when passing through multiple gates and 3.88 m/s on a 'slalom' task in an environment featuring significant visual clutter.
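The control side can be as simple as steering on the horizontal offset of the tracked gate centre reported by the sensor. A minimal proportional sketch follows; the gain and image width are assumed example values, not the tuned controller from the paper.

    def steering_command(target_cx, image_w=256, k_p=2.0):
        # Normalised horizontal error in [-1, 1]; positive = target right.
        err = (target_cx - image_w / 2.0) / (image_w / 2.0)
        return k_p * err  # steering-rate command for the vehicle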

Feature Extraction

scamp_feature_extraction.wmv
scamp_feature_extraction2.wmv

This work demonstrates high-speed, low-power feature extraction implemented on a portable vision system based on the SCAMP-5 vision chip. This embedded system executes a parallelised FAST16 corner detection algorithm on the vision chip's programmable on-focal-plane processor array to detect the corner points in the current image frame. The coordinates of these corner points are then extracted using the vision chip's event address readout infrastructure. These coordinates, as sparse data, can then be output through any of the IO buses within the vision system's micro-controller at low latency. The USB-powered (400 mA) system is capable of outputting 250 features at 2,300 frames per second (FPS) in ideal lighting conditions, while 1,000 FPS can be achieved in an indoor environment. The system can be applied to the real-time control of agile robots and miniature aerial vehicles.
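For reference, the corner test itself is simple: a pixel is a corner if enough contiguous pixels on the 16-point Bresenham circle around it are all brighter (or all darker) than the centre by a threshold. The chip evaluates this at every pixel simultaneously; the serial sketch below is illustrative, with the contiguity count and threshold as assumed defaults.

    import numpy as np

    # Offsets (dx, dy) of the 16-pixel Bresenham circle of radius 3.
    CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2),
              (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0),
              (-3, 1), (-2, 2), (-1, 3)]

    def is_fast_corner(img, y, x, t=20, n=9):
        c = int(img[y, x])
        ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
        for sign in (1, -1):  # brighter run, then darker run
            flags = [(p - c) * sign > t for p in ring]
            run = best = 0
            for f in flags * 2:  # doubled list handles wrap-around
                run = run + 1 if f else 0
                best = max(best, run)
            if best >= n:
                return True
        return False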

High Dynamic Range

Varying light levels across a scene can make it difficult to see detail throughout the image. SCAMP is capable of producing an HDR image prior to running other vision algorithms, helping to deal with difficult lighting conditions.

The two images on the left show the two extremes: the first exposed for the bright areas, the second for the darker areas.
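A toy version of the fusion step, assuming two captures at a known 4x exposure ratio (all values here are illustrative, not SCAMP's actual on-plane procedure): take the long exposure everywhere it has not saturated, and fall back to the rescaled short exposure where it has.

    import numpy as np

    def fuse_exposures(short_exp, long_exp, ratio=4.0, sat=250):
        # Long exposure for the dark regions, short for the saturated ones.
        out = long_exp.astype(np.float64)
        saturated = long_exp >= sat
        out[saturated] = short_exp[saturated].astype(np.float64) * ratio
        return out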

Edge Detection

Visual image captured by SCAMP

Edge detection returned by SCAMP

Info about edge detection to follow

Image Scaling, Rotation and other Transformations

Tracking at 100,000 Frames Per Second

The SCAMP chip has been used to conduct real-time image processing at 100,000 fps, locating a closed-shape object amongst clutter. Further detail on this work is presented here: http://ieeexplore.ieee.org/abstract/document/6578654
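One way to separate a closed shape from open clutter, in the spirit of the chip's parallel flooding operations: flood the background inward from the image border, so that only pixels enclosed by a closed contour remain unreached. The SciPy call below is a serial stand-in for what the PPA does in parallel.

    import numpy as np
    from scipy.ndimage import binary_fill_holes

    def closed_shape_interior(edge_mask):
        # edge_mask: boolean edge image. Interiors of closed contours are
        # the filled regions minus the edges themselves.
        return binary_fill_holes(edge_mask) & ~edge_mask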