Call for Papers
We seek papers on all aspects of learning from small sample sizes, drawn from any problem domain where this issue is prevalent (e.g., bioinformatics and omics, machine vision, anomaly detection, drug discovery, medical imaging, multi-label classification, multi-task classification, density-based clustering and density estimation, among others).
In particular:
Theoretical and empirical analyses of learning from small samples:
Which properties of data support, or prevent, learning from a small sample?
Which forms of side information support learning from a small sample?
When do guarantees break down, both in theory and in practice?
Techniques and algorithms targeted at learning from small sample sizes, including, but not limited to:
Semi-supervised learning.
Transfer learning.
Representation learning.
Dimensionality reduction.
Application of domain knowledge/informative priors.
Sparse methods.
Reproducible case studies.
Submit at: