From Theory to Practice: Workshop on Large Language and Foundation Models (WLLFM'24)
In conjunction with the IEEE BigData Conference 2024
Washington DC, USA
16th of December 2024
Venue: Hyatt Regency Washington on Capitol Hill, 400 New Jersey Avenue NW, Washington, D.C. 20001, United States
Room: Congressional D
News
[02 Dec 24] The schedule has been published.
[28 Nov 24] The date for our workshop has been announced: it will take place on Monday, the 16th of December. See also here: https://www3.cs.stonybrook.edu/~ieeebigdata2024/developing_schedule.html
[15 Nov 24] The review process is complete and we thank all reviewers for their work. The notifications will be sent out today.
[15 Nov 24] The exact workshop date will be announced as soon as registration ends.
[24 Oct 24] Submission link and information about the formatting instructions have been updated.
[24 May 24] After a very successful first iteration, we are happy to announce that the Workshop on Large Language and Foundation Models will be hosted in conjunction with the IEEE BigData Conference 2024 in Washington DC! This year's workshop is organized by Rafet Sifa (Lamarr Institute for Machine Learning and Artificial Intelligence, University of Bonn, and Fraunhofer IAIS, Germany), Dhaval Patel (IBM Research, USA), Linsey Pang (Salesforce, USA), Wei Liu (University of Technology Sydney, Australia), and Tobias Deußer (Fraunhofer IAIS, Germany).
About the workshop
Large language and foundation models have had a significant impact on the field of Artificial Intelligence. These models excel at complex language-related tasks like text generation, sentiment analysis, machine translation, and question answering, often surpassing human performance. However, there is still a gap between theoretical understanding and practical implementation. To bridge this gap, our workshop brings together researchers, practitioners, and industry experts to share experiences, insights, and best practices in leveraging these models. The workshop aims to foster collaboration between academia and industry, providing a platform to discuss advancements, address challenges, and explore opportunities in the field of foundation models and large language models. Additionally, this year's workshop will feature keynote presentations from renowned experts and a group discussion on problem-solving with large language models, complementing the innovative research papers that will serve as a basis for discussion.
Call for Papers and Submission
Paper submission link: https://wi-lab.com/cyberchair/2024/bigdata24/scripts/submit.php?subarea=S42&undisplay_detail=1&wh=/cyberchair/2024/bigdata24/scripts/ws_submit.php
Papers should be submitted single-blind.
See below for updates about paper formats and templates.
Topics of interest include, but are not limited to:
Model Training and Optimization:
Techniques to deal with hallucinations
Training data for LLMs
Efficient and stable techniques for training and fine-tuning LLMs
Scalable approaches for distributed model training
Middleware for scale-out data preparation for LLM training
Workflow orchestration for end-to-end LLM life cycle
Resource management for compute and energy efficient model training
Representation learning
Model Utilization and Integration:
Using LLMs effectively as tools for Reinforcement Learning or search
Enhancing LLM capabilities by using external tools such as search engines
Visual prompt tuning and in-context learning
Enabling easy experimentation with high resource utilization for training foundation models in the cloud
Strategies to scale resources for training and fine-tuning foundation models
Instruction tuning including generation of instruction tuning data
Parallel training: data, model, and tensor parallelism (attention and weights)
Distributed workflows for data cleansing and model usage (e.g., LangChain)
Principled AI
Investigating reasoning capabilities of LLMs
Retrieval Augmented Generation
Alternative architectures such as State Space Models
Compact Language Models and Knowledge Distillation:
Knowledge representations for training small/compact language models
Evaluation of different teacher-student distillation and model compression strategies
Techniques for efficient data encoding to maintain linguistic properties in compact models
Deployment of lightweight models in resource-constrained environments
Case studies on the effectiveness of compact models in various NLP tasks
Application-Specific Models:
Math LLMs
Multimodal Foundation Models
Trustworthy Foundation Models
Large-scale Visual Foundation Models
Time-series foundation models for forecasting, prediction, and control
Multi-agent systems using LLMs
Recommender systems using LLMs
Knowledge management using LLMs
Knowledge Incorporation and Adaptation:
Approaches to handle knowledge recency and effectively update knowledge within LLMs
Incorporating domain knowledge in LLMs
Evaluation and Benchmarking:
Additional benchmarks to fill the gap between human and automatic reference-based evaluation
Paper formats are:
Long papers: up to 10 pages including all figures, tables, and references
Short and vision papers: up to 6 pages including all figures, tables, and references
All papers must be submitted in the IEEE conference format:
Official templates: https://www.ieee.org/conferences/publishing/templates.html
Overleaf templates: https://www.overleaf.com/latex/templates/ieee-conference-template/grfzhhncsfqn
For Tutorials: Researchers and practitioners are invited to submit suggestions for tutorials. Please contact us via Rafet.Sifa@iais.fraunhofer.de with the subject line "WLLFM Tutorials".
Important Dates
Oct 31, 2024: Due date for full workshop paper submissions
Nov 15, 2024: Notification of paper acceptance to authors
Nov 22, 2024: Camera-ready versions of accepted papers due
Dec 16, 2024: Workshop date
Organization
Workshop Chairs
Rafet Sifa (Lamarr Institute for Machine Learning and Artificial Intelligence, University of Bonn, and Fraunhofer IAIS, Germany)
Dhaval Patel (IBM Research, USA)
Linsey Pang (Salesforce, USA)
Wei Liu (University of Technology Sydney, Australia)
Publication Chair
Tobias Deußer (Fraunhofer IAIS, Germany)
Program Committee
Lucie Flek, Lamarr Institute for Machine Learning and Artificial Intelligence, Germany
Christian Bauckhage, Lamarr Institute for Machine Learning and Artificial Intelligence, Germany
Özlem Uzuner, George Mason University, USA
Armin Berger, University of Bonn, Germany
Aashish Jain, Salesforce, USA
Zian Wang, Stony Brook University, USA
Qiushui Xu, Penn State University, USA
Qikai Yang, UIUC, USA
Zheng Liu, Northeastern University, USA
Tingting Tang, University of Southern California, USA
Bo Yuan, Georgia Institute of Technology, USA
Yunzhe Wang, University of Southern California, USA
Yong Liu, Salesforce, USA
Mounika Kamsali Veera, Walmart, USA
Lisa Pucknat, AXA, Germany
Pengfei Li, Visa Research, USA
Surya Lakshmi Sujitha Pasumarty, Albertsons, USA
Maren Pielka, Fraunhofer IAIS, Germany
Lorenz Sparrenberg, University of Bonn, Germany
Yingfan Wang, Duke University, USA
Tobias Uelwer, Fraunhofer IAIS, Germany
Tian Long Xu, Squirrel AI Learning, USA
Hao Yan, George Mason University, USA
Mingxuan Yang, Brown University, USA
Dezhi Yu, UC Berkeley, USA
Haodong Zhang, NYU, USA
Chenyang Zhao, UCLA, USA
Contact
For questions and comments, please contact Rafet Sifa at Rafet.Sifa@iais.fraunhofer.de.