Half-Day Workshop – Afternoon
Room 201
13:45–14:00
14:00–14:45
Aaron Klein
14:45–15:30
Exploiting LLMs to Improve Recommendation Diversity, Basar Yilmaz (Middle East Technical University), Ali Eren Çankaya (Middle East Technical University), Ismail Sengor Altingovde (Middle East Technical University), Pinar Karagoz (Middle East Technical University), and Ismail Hakki Toroslu (Middle East Technical University)
Exploring Approaches for Detecting Memorization of Recommender System Data in Large Language Models, Antonio Colacicco (Politecnico di Bari), Vito Guida (Politecnico di Bari), Dario Di Palma (Politecnico di Bari), Fedelucio Narducci (Politecnico di Bari), and Tommaso Di Noia (Politecnico di Bari)
Identifying Business Trends from Stock Market Data utilizing Domain-Specialized Sentence-RoBERTa and BERTopic Technique, Ye Lim Jung (Korea Institute of Science and Technology Information), and Hyoung Sun Yoo (Korea Institute of Science and Technology Information)
15:30–16:00
16:00–17:00
CliqueParcel: An Approach For Batching LLM Prompts That Jointly Optimizes Efficiency And Faithfulness, Jiayi Liu (Purdue University), Tinghan Yang (Purdue University), and Jennifer Neville (Purdue University/Microsoft Research)
Self-Adapted Entity-Centric Data Augmentation for Discontinuous Named Entity Recognition, Wen-Fang Su (Galaxy Software Services), Hsiao-Wei Chou (National Taiwan University of Science and Technology), and Wen-Yang Lin (National University of Kaohsiung)
17:00–17:20
Jacek Golebiowski
17:20–17:30
Designing neural networks involves numerous architectural choices that typically require significant human expertise and extensive trial-and-error. This challenge becomes particularly pronounced when developing compact models for deployment in resource-constrained environments, where efficiency is as critical as accuracy.
Neural Architecture Search (NAS) has emerged as a powerful framework to automate architecture design, often surpassing manually engineered models. In hardware-aware and multi-objective settings, NAS enables the discovery of Pareto-optimal architectures that balance competing requirements such as validation performance, memory footprint, and inference latency.
In this talk, I will provide an overview of NAS, with a focus on its application to multi-objective optimization. I will also highlight recent work on leveraging NAS to design efficient architectures for small-scale language models, demonstrating its potential to accelerate progress toward practical, lightweight AI systems.
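To make the multi-objective setting concrete, here is a minimal Python sketch of the core idea: sample candidate architectures, score each on two competing objectives, and keep the Pareto front. The toy search space, the stand-in evaluate function, and all names are illustrative assumptions, not the specific method presented in the talk.

import random

# Toy search space: each architecture is a (depth, width) choice.
SEARCH_SPACE = {"depth": [2, 4, 8, 12], "width": [64, 128, 256, 512]}

def sample_architecture():
    """Draw one random candidate from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(arch):
    """Stand-in objectives: in practice, error would come from training
    the candidate and latency from profiling it on the target hardware."""
    error = 1.0 / (arch["depth"] * arch["width"]) + random.gauss(0, 1e-4)
    latency = arch["depth"] * arch["width"] * 1e-3  # pseudo-latency in ms
    return error, latency

def pareto_front(candidates):
    """Keep candidates not dominated in both objectives (lower is better)."""
    front = []
    for arch, (err, lat) in candidates:
        dominated = any(
            e <= err and l <= lat and (e < err or l < lat)
            for _, (e, l) in candidates
        )
        if not dominated:
            front.append((arch, (err, lat)))
    return front

candidates = []
for _ in range(50):
    arch = sample_architecture()
    candidates.append((arch, evaluate(arch)))

for arch, (err, lat) in pareto_front(candidates):
    print(f"{arch}  error={err:.5f}  latency={lat:.2f} ms")

Real NAS systems replace random sampling with smarter search strategies (evolutionary, Bayesian, or gradient-based), but the Pareto-front logic that balances accuracy against hardware cost is the same.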
Every organization accumulates valuable data—images, notes, reports—but many can't afford to send sensitive information to the cloud. Small Language Models (SLMs) run locally on your own hardware, keeping data private while delivering fast results at a fraction of the cost. Traditionally, building custom AI for production tasks requires expert teams and months of development. AutoML addresses part of this by automating model selection and training, but practitioners still face the bottleneck of manually labeling thousands of training examples. This is why ChatGPT succeeded: it only asks you to describe what you want, not to provide labeled datasets.
We believe custom models need to match this experience, so in this session we present our model training pipeline that extends AutoML to data preparation itself. You define the problem, and a larger AI "teacher" automatically generates and refines training examples to create a specialized "student" model tailored to tasks like request triage, API chat interfaces, and data transformations. We'll dive deeper into the structure of the generated data and demonstrate how to effectively navigate data generation by controlling the latent variables that define each datapoint.
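As a rough illustration of the teacher-student pipeline described above, the sketch below uses a placeholder call_teacher function standing in for a large-model API, and a small scikit-learn classifier standing in for the SLM "student"; the task, function names, and sample data are all illustrative assumptions, not the pipeline presented in the session.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

TASK = "Triage a support request as 'billing', 'bug', or 'feature'."

def call_teacher(prompt: str) -> list[tuple[str, str]]:
    """Placeholder: a real implementation would prompt a large LLM to
    synthesize (text, label) pairs, varying latent attributes such as
    tone, length, and topic to cover the expected input distribution."""
    return [
        ("I was charged twice this month", "billing"),
        ("The export button crashes the app", "bug"),
        ("Please add dark mode to the dashboard", "feature"),
    ]

# 1. The teacher generates (and could iteratively refine) labeled examples.
examples = call_teacher(f"Generate labeled training data for: {TASK}")
texts, labels = zip(*examples)

# 2. Train the compact "student" on the synthetic data.
student = make_pipeline(TfidfVectorizer(), LogisticRegression())
student.fit(list(texts), list(labels))

# 3. The student now handles new requests locally, with no cloud calls.
print(student.predict(["My invoice looks wrong"]))

The interesting engineering lives inside the teacher step: by controlling the latent variables that define each generated datapoint, the pipeline can steer coverage of the input space rather than hoping a static prompt produces diverse examples.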
To ensure a smooth program and balanced discussion, please follow the presentation guidelines below:
⏱ Presentation Format & Timing
Long Papers - 20 minutes total (15 minutes presentation + 5 minutes Q&A)
Short Papers - 15 minutes total (10 minutes presentation + 5 minutes Q&A)
🎤 Presentation Logistics
Please join your session in advance and send your slides as a PDF or PPTX file to merrafelice@gmail.com.
A session chair will keep time and moderate questions. If your talk ends early, we will proceed with Q&A or transition to the next speaker.
We appreciate your contribution and look forward to your presentation!