The 2024 NeurIPS Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability calls for high-quality contributions of 4-8 pages (references and appendix not included) on algorithmic advances, mathematical foundations, and empirical observations of fine-tuning.
LaTeX style file: Overleaf Template or Google Drive or download (NeurIPS'24 template). The ICLR'25 template is also allowed. NB: The page limit for the main text is 4-8 pages.
Key topics include, but are not limited to:
Exploration of new methodologies for fine-tuning across various strategies, architectures, and systems, from low-rank to sparse representations, from deep neural networks to LLMs, and from algorithmic design to hardware design.
Theoretical foundations of fine-tuning, e.g., approximation, optimization, and generalization from the perspectives of transfer learning, deep learning theory, and RLHF. Theoretical understanding of low-rank representations from the viewpoint of sketching and signal recovery is also welcome.
Works that present new experimental observations that advance our understanding of the underlying mechanisms of fine-tuning, reveal discrepancies between existing theoretical analyses and practice, or address the explainability and interpretability of fine-tuning in scientific contexts.
Topics are not limited to fine-tuning and LLMs. Any theoretical and/or empirical results for understanding and advancing modern practices for efficiency in machine learning are also welcome.