Adversarially Robust Object Detection

Towards Adversarially Robust Object Detection

Haichao Zhang, Jianyu Wang

IEEE International Conference on Computer Vision (ICCV) 2019

Abstract

Object detection is an important vision task and has emerged as an indispensable component in many vision systems, making its robustness an increasingly important performance factor for practical applications. While object detection models have been shown to be vulnerable to adversarial attacks by many recent works, very few efforts have been devoted to improving their robustness. In this work, we take an initial step in this direction. We first revisit and systematically analyze object detectors and many recently developed attacks from the perspective of model robustness. We then present a multi-task learning perspective of object detection and identify an asymmetric role of the task losses. We further develop an adversarial training approach that can leverage multiple sources of attacks to improve the robustness of detection models. Extensive experiments on PASCAL-VOC and MS-COCO verify the effectiveness of the proposed approach.

A First Impression: Standard vs. Robust Object Detectors

Standard vs. robust detectors on clean and adversarial images. The adversarial image is produced using PGD-based detector attacks with a perturbation budget of 8 (out of 256). The standard model fails completely on the adversarial image, while the robust model still produces reasonable detection results.

Analysis: A Multi-Task Perspective

One-stage detectors, which serve as an essential building block for many detector variants, can be viewed from a multi-task learning perspective (see the sketch after the figure below):

One-stage detector architecture
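
As a rough sketch of this perspective (the notation below is our own shorthand, not copied verbatim from the paper), the training objective of a one-stage detector on an image x with ground-truth class labels y_k and box targets b_k decomposes into a classification term and a localization term over candidate locations:

\mathcal{L}_{\mathrm{det}}(x, \{y_k, b_k\}; \theta) = \sum_{k} \left[ \ell_{\mathrm{cls}}\!\left(f_\theta(x)_k, y_k\right) + \ell_{\mathrm{loc}}\!\left(f_\theta(x)_k, b_k\right) \right]

where f_\theta(x)_k denotes the prediction at the k-th candidate location. This decomposition makes explicit that gradients, and therefore attacks, can be derived from either task loss individually or from their combination.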

Detection Attacks Guided by Task Losses

Many attack methods for object detectors have been developed recently [57, 32, 6, 11, 55, 23, 22, 31]. Although these attacks differ in their formulations, when viewed from the multi-task learning perspective they share the same framework and design principle: an attack on a detector can be constructed from variants of the individual task losses or their combinations.
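
To make this principle concrete, here is a minimal PGD-style sketch in PyTorch-like code; the detector interface (a model call that returns a dictionary of per-task losses) and the function names are assumptions for illustration, not the interface of any released implementation.

import torch

def pgd_detector_attack(model, image, targets, loss_fn, eps=8/255., step=2/255., iters=10):
    """Craft an adversarial image for a detector by ascending a chosen task loss.

    loss_fn selects which loss guides the attack: the classification loss,
    the localization loss, or a combination of the two.
    """
    x_adv = image.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        losses = model(x_adv, targets)          # assumed to return {'cls': ..., 'loc': ...}
        loss = loss_fn(losses)                  # e.g. losses['cls'], losses['loc'], or their sum
        grad = torch.autograd.grad(loss, x_adv)[0]
        # signed-gradient ascent step, projected back into the eps-ball and the valid pixel range
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, image - eps), image + eps).clamp(0, 1)
    return x_adv

Under this view, many of the published detector attacks differ mainly in the choice of loss_fn and in which predictions or targets are included in the loss.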

Towards Adversarially Robust Detection

The Roles of Task Losses in Robustness

Mutual impacts of task losses and gradient visualization. The classification and localization tasks impact each other, but are not fully aligned.

Adversarial Training for Robust Detection

Motivated by the analysis above, we develop the following adversarial training formulation for robust detection:
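
One possible instantiation of this idea is sketched below; the training-loop scaffolding and the helper pgd_detector_attack (from the sketch above) are assumptions for illustration, not the released implementation. The idea is to generate one adversarial example guided by each task loss, keep the one that maximizes the overall detection loss, and update the model on it.

import torch

def adv_train_step(model, optimizer, image, targets, eps=8/255.):
    # Two attack sources: classification-guided and localization-guided adversarial examples
    x_cls = pgd_detector_attack(model, image, targets, lambda l: l['cls'], eps=eps)
    x_loc = pgd_detector_attack(model, image, targets, lambda l: l['loc'], eps=eps)

    # Select the candidate that maximizes the overall (classification + localization) loss
    with torch.no_grad():
        total = lambda x: sum(model(x, targets).values())
        x_adv = x_cls if total(x_cls) >= total(x_loc) else x_loc

    # Standard training update on the selected adversarial example
    optimizer.zero_grad()
    losses = model(x_adv, targets)
    loss = sum(losses.values())
    loss.backward()
    optimizer.step()
    return loss.item()

This selection rule is one simple way to realize training over the union of the task-oriented attack sources discussed above.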

Results

Related Publications

Robust Object Detection

Haichao Zhang and Jianyu Wang, Towards Adversarially Robust Object Detection, ICCV 2019 [Poster]

Robust Visual Recognition

Haichao Zhang, Jianchao Yang, Yanning Zhang, Nasser M. Nasrabadi and Thomas S. Huang, Close the Loop: Joint Blind Image Restoration and Recognition with Sparse Representation Prior, ICCV 2011