Class-Wise Buffer Management for Incremental Object Detection: An Effective Buffer Training Strategy
[ICASSP'24] [paper]
Who Should Read This Paper
Researchers and engineers interested in incremental learning for object detection
Anyone tackling “catastrophic forgetting” and looking for memory-efficient solutions
Practitioners who need to add new classes continuously (e.g., new products in retail, novel objects in robotics) without retraining from scratch
What the Paper Covers
Guarantee Minimum (GM)
Ensures each class is sufficiently represented in the replay buffer, preventing severe data imbalance
Hierarchical Sampling
Chooses or replaces buffer samples based on (1) the number of unique labels, and (2) the loss (how “hard” the sample was for the model); see the buffer sketch after this list
Circular Experience Replay (CER)
Alternates training between the new dataset and the buffer to reduce forgetting, leveraging previous knowledge more effectively; see the training-loop sketch below
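
Below is a minimal Python sketch of how GM and hierarchical sampling could work together, assuming a list-backed buffer. The class name `ClassWiseBuffer`, the `(image, labels, loss)` record layout, and the exact tie-breaking order are illustrative assumptions, not the authors' released code.

```python
from collections import defaultdict

class ClassWiseBuffer:
    """Sketch of a class-wise replay buffer with a Guarantee Minimum (GM)
    quota and hierarchical eviction. Names and policy details are
    illustrative assumptions, not the paper's implementation."""

    def __init__(self, capacity, min_per_class):
        self.capacity = capacity            # total buffer size (e.g. ~1% of the dataset)
        self.min_per_class = min_per_class  # GM: floor on samples per class
        self.samples = []                   # records of (image, labels, loss)

    def _class_counts(self):
        counts = defaultdict(int)
        for _, labels, _ in self.samples:
            for c in set(labels):
                counts[c] += 1
        return counts

    def _pick_victim(self):
        """Hierarchical criterion: evict the sample with (1) the fewest
        unique labels, breaking ties by (2) the lowest loss (the "easiest"
        sample), while never dropping a class below the GM floor."""
        counts = self._class_counts()
        candidates = [
            i for i, (_, labels, _) in enumerate(self.samples)
            if all(counts[c] > self.min_per_class for c in set(labels))
        ]
        if not candidates:  # every sample protects some class's GM quota
            candidates = list(range(len(self.samples)))
        return min(candidates, key=lambda i: (len(set(self.samples[i][1])),
                                              self.samples[i][2]))

    def add(self, image, labels, loss):
        """Insert a sample; once full, replace the least valuable one."""
        record = (image, labels, loss)
        if len(self.samples) < self.capacity:
            self.samples.append(record)
        else:
            self.samples[self._pick_victim()] = record
```

Evicting samples with few unique labels and low loss first keeps the buffer biased toward label-diverse, hard examples, while the GM check guarantees no class falls below its minimum quota.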
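And a sketch of the CER training loop against a generic PyTorch-style interface: a strict one-to-one alternation between new data and the buffer is one plausible reading of the schedule, and `model(images, targets)` returning a scalar loss is an assumption.

```python
def circular_experience_replay(model, new_loader, buffer_loader, optimizer, steps):
    """Alternate optimization steps between new-task batches and replayed
    buffer batches, so old-class knowledge is refreshed throughout training
    rather than only at the end."""
    new_iter, buf_iter = iter(new_loader), iter(buffer_loader)
    for step in range(steps):
        use_new = step % 2 == 0  # even steps: new data; odd steps: buffer
        try:
            images, targets = next(new_iter if use_new else buf_iter)
        except StopIteration:
            # Restart whichever loader ran out so the cycle keeps alternating.
            if use_new:
                new_iter = iter(new_loader)
                images, targets = next(new_iter)
            else:
                buf_iter = iter(buffer_loader)
                images, targets = next(buf_iter)
        loss = model(images, targets)  # assumes the model returns a scalar loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```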
Real-World Applications
Frequent Model Updates: Ideal for dynamic scenarios like autonomous driving, CCTV monitoring, or industrial inspection, where new object types appear regularly
Resource-Constrained Environments: Saves time and GPU costs by avoiding full retraining while still maintaining accuracy on older classes
Key Strengths
Preserves Old Classes: Minimizes catastrophic forgetting by maintaining class diversity in the buffer
Efficient Buffer Usage: Shows solid performance even with small replay buffers (around 1% of the dataset)
Simple but Effective: Easy to integrate into existing pipelines; only a few hyperparameters need tuning
Empirical Validation: Demonstrates state-of-the-art results on MS COCO, outperforming other replay-based methods