Seminar 2021
Superpixel-guided Iterative Learning from Noisy Labels for Medical Image Segmentation
YOLOv3: An Incremental Improvement
Fast AutoAugment
YOLO9000: Better, Faster, Stronger
Efficient-CapsNet: Capsule Network with Self-Attention Routing
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
You Only Look Once: Unified, Real-Time Object Detection
SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization
Knowledge Distillation: A Survey
Going Deeper with Convolutions
Language Models are Few-Shot Learners
Attention Is All You Need
Railroad is not a Train: Saliency as Pseudo-pixel Supervision for Weakly Supervised Semantic Segmentation
Deep Residual Learning for Image Recognition
DetectoRS: Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution
Rethinking the Heatmap Regression for Bottom-up Human Pose Estimation
Purifying Gaze Feature for Generalizable Gaze Estimation
Very Deep Convolutional Networks for Large-Scale Image Recognition
Meta Pseudo Labels
Visualizing and Understanding Convolutional Networks
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
ImageNet Classification with Deep Convolutional Neural Networks
RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation
Deep Face Recognition: A Survey
Path Aggregation Network for Instance Segmentation
A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
PolyTransform: Deep Polygon Transformer for Instance Segmentation
SSD: Single Shot MultiBox Detector