1. Structural Priors for Effective Learning
Encode domain-specific structures to improve performance and interpretability
- Visual structure (image):
InstaGAN (ICLR'19), OACon (NeurIPS'21), OAMixer (CVPRW'22), CAST (ICLR'24), SHED (tbd), CAFT (tbd)
- Temporal structure (video, language):
DIGAN (ICLR'22), HOMER (ICLR'24), Simba (TMLR'25), ICBP (tbd)
- Structured reasoning (generative models):
SuRe (ICLR'24), PRIS (CVPR'26), STEP (tbd)
- Physical structure (robotics):
EIT-NN (IROS'19), EIT-NN-v2 (T-RO'21)
- Relational structure (graph, table):
LoGe (ICMLW'21), DPM-SNC (NeurIPS'23), MDR (tbd)
2. Statistical Priors for Reliable Learning
Estimate and leverage dataset statistics to improve robustness and generalization
- Robust representation (outliers, etc.):
CSI (NeurIPS'20), MASKER (AAAI'21), RoPAWS (ICLR'23)
- Reliable generation (fairness, etc.):
GOLD (NeurIPS'19), FICS (TMLR'23), StayFair (tbd)
- Extension to novel domains:
FreezeD (CVPRW'20), S-CLIP (NeurIPS'23), ToMA (tbd)
- Adaptation to novel contexts:
- Analyzing & leveraging foundation models:
B2T (CVPR'24), PDS (ICLR'26), Idis (tbd), TO (tbd)
- Neural architecture design: