Times are given in MST (Albuquerque local time) and GMT.
Time (MST) Time (GMT) Event
09:30 – 09:45 16:30 – 16:45 Opening remarks
09:45 – 10:30 16:45 – 17:30 Invited talk: Ana Marasovic
10:30 – 11:00 17:30 – 18:00 Coffee break
11:00 – 12:00 18:00 – 19:00 Poster session I
12:00 – 13:30 19:00 – 20:30 Lunch
13:30 – 14:15 20:30 – 21:15 Invited talk: Najoung Kim
14:15 – 15:30 21:15 – 22:30 Panel discussion
15:30 – 15:45 22:30 – 22:45 Coffee break
15:45 – 16:30 22:45 – 23:30 Invited talk: Akari Asai
16:30 – 17:30 23:30 – 00:30 Poster session II
17:30 – 17:45 00:30 – 00:45 Closing remarks
Poster Session I
DEPTH: Discourse Education through Pre-Training Hierarchically
Cross-Modal Learning for Music-to-Music-Video Description Generation
A Comparative Study of Learning Paradigms in Large Language Models via Intrinsic Dimension
Investigating Adapters for Parameter-efficient Low-resource Automatic Speech Recognition
Reverse Probing: Evaluating Knowledge Transfer via Finetuned Task Embeddings for Coreference Resolution
Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models
State Space Models are Strong Text Rerankers
Poster Session II
Tracking Universal Features Through Fine-Tuning and Model Merging
Prompt Tuning Can Simply Adapt Large Language Models to Text Encoders
Choose Your Words Wisely: Domain-adaptive Masking Makes Language Models Learn Faster
Efficient Document-level Event Relation Extraction
Punctuation Restoration Improves Structure Understanding without Supervision
Large Language Models Are Overparameterized Text Encoders
Vocabulary-level Memory Efficiency for Language Model Fine-tuning
Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference
Decoding Dark Matter: Specialized Sparse Autoencoders for Interpreting Rare Concepts in Foundation Models
Enhancing Temporal Understanding in LLMs for Semi-structured Tables
From Argumentation to Deliberation: Perspectivized Stance Vectors for Fine-grained (Dis)agreement Analysis