1. Structural Priors for Effective Learning
Encode domain-specific structures to improve performance and interpretability (sketched below)
- Visual structure (image):
InstaGAN (ICLR'19), OACon (NeurIPS'21), OAMixer (CVPRW'22), CAST (ICLR'24), SHED (tbd), F-CAST (tbd)
- Temporal structure (video, language):
DIGAN (ICLR'22), HOMER (ICLR'24), Simba (TMLR'25)
- Structured reasoning (generative models):
SuRe (ICLR'24), PRIS (tbd)
- Physical structure (robotics):
EIT-NN (IROS'19), EIT-NN-v2 (T-RO'21)
- Relational structure (graph, table):
LoGe (ICMLW'21), DPM-SNC (NeurIPS'23)
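
A toy sketch of the first pillar, under illustrative assumptions: patch tokens are mixed only with patches that share the same object label, i.e., a visual structural prior restricts which tokens may interact. The function name `object_aware_mix` and the segmentation-derived label map are hypothetical choices for this sketch, not the mechanism of any specific paper above.

```python
# Minimal illustration of encoding a structural prior: object-aware token mixing.
# Assumes a per-patch object id (e.g., from a segmentation map) is available.
import torch

def object_aware_mix(tokens: torch.Tensor, obj_labels: torch.Tensor) -> torch.Tensor:
    """Average-mix each patch token only with patches of the same object.

    tokens:     (N, D) patch embeddings
    obj_labels: (N,)   integer object id per patch
    """
    # mask[i, j] = 1 iff patches i and j belong to the same object.
    same_obj = (obj_labels[:, None] == obj_labels[None, :]).float()
    # Normalize rows so each output token is the mean over same-object patches.
    weights = same_obj / same_obj.sum(dim=-1, keepdim=True)
    return weights @ tokens

# Toy usage: 6 patches, 4-dim embeddings, two objects.
tokens = torch.randn(6, 4)
obj_labels = torch.tensor([0, 0, 1, 1, 1, 0])
mixed = object_aware_mix(tokens, obj_labels)
print(mixed.shape)  # torch.Size([6, 4])
```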
2. Statistical Priors for Reliable Learning
Estimate and leverage dataset statistics to improve robustness and generalization (sketched below)
- Robust representation (outliers, etc.):
CSI (NeurIPS'20), MASKER (AAAI'21), RoPAWS (ICLR'23)
- Reliable generation (fairness, etc.):
GOLD (NeurIPS'19), FICS (TMLR'23), GFG (tbd)
- Extension to novel domains & contexts:
FreezeD (CVPRW'20), S-CLIP (NeurIPS'23), OAK (CVPR'25), ZEPA (tbd)
- Analyzing & leveraging foundation models:
B2T (CVPR'24), PDS (tbd), Idis (tbd)
- Efficient model design:
LAP (ICLR'20), LAMP (ICLR'21)
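
A toy sketch of the second pillar, under illustrative assumptions: fit a Gaussian to training-set features and flag test samples whose Mahalanobis distance exceeds a percentile threshold as potential outliers. The helper names `fit_feature_stats` and `mahalanobis_score` and the 95th-percentile cutoff are hypothetical choices for this sketch, not the method of any specific paper above.

```python
# Minimal illustration of leveraging dataset statistics for robustness:
# Gaussian feature statistics + Mahalanobis distance for outlier flagging.
import numpy as np

def fit_feature_stats(train_feats: np.ndarray):
    """Estimate mean and (regularized) precision matrix of training features (N, D)."""
    mean = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-3 * np.eye(train_feats.shape[1])
    return mean, np.linalg.inv(cov)

def mahalanobis_score(x: np.ndarray, mean: np.ndarray, prec: np.ndarray) -> np.ndarray:
    """Mahalanobis distance of each row of x (M, D) to the training distribution."""
    diff = x - mean
    return np.sqrt(np.einsum("md,dk,mk->m", diff, prec, diff))

# Toy usage: in-distribution features near 0, plus a shifted "outlier" batch.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 8)),
                  rng.normal(5.0, 1.0, size=(5, 8))])
mean, prec = fit_feature_stats(train)
threshold = np.quantile(mahalanobis_score(train, mean, prec), 0.95)
flags = (mahalanobis_score(test, mean, prec) > threshold).astype(int)
print(flags)  # 1 marks flagged samples; the shifted batch should be flagged.
```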