Diffusion Meets Few-shot Class Incremental Learning [ArXiv'25] [paper]
*Note: Please refer to the full paper for extended experiments, ablations, and other details.*


Who Should Read This Paper


What the Paper Covers


Real-World Applications (From my perspective)


Key Strengths

   - Only ~6M parameters are trained; the large diffusion backbone remains frozen (see the sketch after this list).

   - Enables fast adaptation without large memory overhead or complex replay buffers.
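
A minimal PyTorch-style sketch of the frozen-backbone idea described above: freeze every parameter of a large pretrained backbone and optimize only a small trainable head. The module names and sizes here are illustrative placeholders, not the paper's actual architecture or parameter counts.

```python
import torch
import torch.nn as nn

# Stand-in for a large pretrained diffusion backbone (illustrative, not the paper's model).
backbone = nn.Sequential(
    nn.Linear(512, 2048),
    nn.GELU(),
    nn.Linear(2048, 512),
)

# Small trainable head for the new few-shot classes (sizes are assumptions).
adapter = nn.Sequential(
    nn.Linear(512, 256),
    nn.GELU(),
    nn.Linear(256, 100),  # e.g., 100 incremental classes
)

# Freeze the backbone; only the adapter receives gradients.
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in backbone.parameters())
print(f"trainable params: {trainable:,} | frozen backbone params: {frozen:,}")
```

Because only the adapter's parameters are passed to the optimizer, each incremental session updates a few million weights at most, which is what keeps adaptation fast and memory-light.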


Main Figure (Overall architecture)