Poster Session I (Morning session)
1 ImageNet-RIB Benchmark: Large Pre-Training Datasets Don't Guarantee Robustness after Fine-Tuning
2 Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback
3 Ensembling Finetuned Language Models for Text Classification
4 One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation
5 CRAFT Your Dataset: Task-Specific Synthetic Dataset Generation Through Corpus Retrieval and Augmentation
6 Understanding Visual Concepts Across Models
7 A Layer Selection Approach to Test Time Adaptation
8 ActNAS: Generating Efficient YOLO Models using Activation NAS
9 On Efficient Distillation from LLMs to SLMs
10 Towards Exploring Continual Fine-Tuning for Enhancing Language Ability in Large Language Models
11 Fine tuning language models to align fidelity and efficiency of generative retrieval in multi-turn dialogues
12 PAL: Pluralistic Alignment Framework for Learning from Heterogeneous Preferences
13 A Tensor-based Convolutional Neural Network for Small Dataset Classification
14 Generalizing Alignment Paradigm of Text-to-Image Generation with Preferences through $f$-divergence Minimization
15 Parasite Networks: Transfer Learning in Resource-Constrained Domains
16 Flat-LoRA: Low-Rank Adaption over a Flat Loss Landscape
17 Estimating Effects of Tokens in Preference Learning
18 Efficient Fine-Tuning of CNN-based Foundation Models for Segmentation in 3D Medical Images
19 FourierKAN outperforms MLP on Text Classification Head Fine-tuning
20 E-Tamba: Efficient Transformer-Mamba Layer Transplantation
21 Inducing Semi-Structured Sparsity by Masking for Efficient Model Inference in Convolutional Networks
22 Token Pruning using a Lightweight Background Aware Vision Transformer
23 Improving LLM Generation with Inverse and Forward Alignment: Reward Modeling, Prompting, Fine-Tuning, and Inference-Time Optimization
24 Inconsistencies In Consistency Models: Better ODE Solving Does Not Imply Better Samples
25 Faster, More Efficient RLHF through Off-Policy Asynchronous Learning
26 Comparing Bad Apples to Good Oranges: Aligning Large Language Models via Joint Preference Optimization
27 Online Fine-Tuning with Uncertainty Quantification for Offline Pre-Trained Agents
28 Addax: Resource-Efficient Fine-Tuning of Language Models with a Combination of Forward-Backward and Forward-Only Passes
29 Learning the Regularization Strength for Deep Fine-Tuning via a Data-Emphasized Variational Objective
30 Towards Long-Context Time Series Foundation Models With A Handful Of Additional Parameters
31 What Causes a Disparate Impact in a Quantized Model?
32 Investigating the Role of Fine-Tuning in Addressing the Gap Between Synthetic and Real Data in Generative Foundation Models
33 Model Soup for Better RLHF: Weight Space Averaging to Improve Alignment in LLMs
34 Balancing Cost and Effectiveness of Synthetic Data Generation Strategies for LLMs
35 Flexora: Flexible Low-Rank Adaptation for Large Language Models
36 REACT: Residual-Adaptive Contextual Tuning for Fast Model Adaptation in Cybersecurity
37 Adapting Language Models via Token Translation
38 Efficient Fine-Tuning of Behavior Cloned Policies with Reinforcement Learning from Limited Demonstrations
39 Towards Natural Machine Unlearning
40 Self-Stitching: Widely Applicable and Efficient Transfer Learning Using Stitching Layer
41 MPLoRA: Orthogonal Multi-Path Low-Rank Adaptation for Parameter Efficient Fine-Tuning
42 An empirical study of CLIP fine-tuning with similarity clusters
43 XoRA: Expander Adapted LoRA Finetuning
44 Early Exiting in Deep Neural Networks via Dirichlet-based Uncertainty Quantification
Poster Session II (Afternoon session)