Yan Lu†, Shiqi Jiang‡, Ting Cao‡, Yuanchao Shu‡
New York University†, Microsoft Research‡
In this paper, we propose a task-specific discrimination and enhancement module, and a model-aware adversarial training mechanism, providing a way to exploit idle resources to identify and transform pipeline-specific, low-quality images in an accurate and efficient manner. A multi-exit enhancement model structure and a resource-aware scheduler are further developed to make online enhancement decisions and perform fine-grained inference execution under latency and GPU resource constraints. Experiments across multiple video analytics pipelines and datasets show that our system boosts DNN object detection accuracy by 7.27-11.34% by judiciously allocating 15.81-37.67% of idle resources to frames that tend to yield greater marginal benefits from enhancement.
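To illustrate the scheduling idea, the Python sketch below shows one way a resource-aware scheduler could pair frames with exits of a multi-exit enhancer under an idle-GPU-time budget: frames with larger expected enhancement benefit are served first, and the deepest exit that still fits the remaining budget is chosen. The greedy policy, the cost numbers, and all names (Frame, EXIT_COSTS_MS, schedule_enhancement) are illustrative assumptions, not Turbo's actual implementation.

# Minimal sketch (assumed, not Turbo's code): schedule enhancement work
# for the frames expected to benefit most, within an idle GPU budget.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Frame:
    frame_id: int
    predicted_gain: float  # hypothetical estimate of accuracy benefit from enhancement

# Hypothetical per-exit GPU cost (ms) of a multi-exit enhancement model;
# later exits enhance more aggressively but cost more.
EXIT_COSTS_MS = [4.0, 8.0, 15.0]

def schedule_enhancement(frames: List[Frame],
                         idle_budget_ms: float) -> List[Tuple[int, int]]:
    """Greedily assign enhancement exits to the highest-gain frames
    until the idle GPU budget is exhausted. Returns (frame_id, exit_idx)."""
    plan: List[Tuple[int, int]] = []
    remaining = idle_budget_ms
    # Prioritize frames with larger expected marginal benefit.
    for frame in sorted(frames, key=lambda f: f.predicted_gain, reverse=True):
        # Pick the deepest exit that still fits the remaining budget.
        exit_idx: Optional[int] = None
        for i in reversed(range(len(EXIT_COSTS_MS))):
            if EXIT_COSTS_MS[i] <= remaining:
                exit_idx = i
                break
        if exit_idx is None:
            break  # no exit fits the remaining budget; stop scheduling
        plan.append((frame.frame_id, exit_idx))
        remaining -= EXIT_COSTS_MS[exit_idx]
    return plan

if __name__ == "__main__":
    frames = [Frame(0, 0.9), Frame(1, 0.2), Frame(2, 0.7)]
    print(schedule_enhancement(frames, idle_budget_ms=20.0))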
We evaluate Turbo with three video analytics pipelines (Glimpse, Vigil and NoScope) on two traffic video benchmarks (UA-DETRAC and AICity). We test each video analytics pipeline with three object detection models (EfficientDet-D0, YOLOv3 and Faster R-CNN) and present all results as follows.
@inproceedings{lu22sensys,
author={Lu, Yan and Jiang, Shiqi and Cao, Ting and Shu, Yuanchao},
booktitle={ACM Conference on Embedded Networked Sensor Systems (SenSys)},
title={{Turbo: Opportunistic Enhancement for Edge Video Analytics}},
year={2022},
}
Glimpse: Continuous, Real-Time Object Recognition on Mobile Devices. In SenSys'15. - Glimpse (temporal pruning based VAP)
The Design and Implementation of a Wireless Video Surveillance System. In MobiCom'15. - Vigil (model pruning based VAP)
NoScope: Optimizing Neural Network Queries over Video at Scale. In VLDB'17. - NoScope (temporal + model pruning VAP)
EnlightenGAN: Deep Light Enhancement without Paired Supervision. In TIP'20. - EnlightenGAN (a GAN-based image enhancement model for low-light images)