From Theory to Practice: Workshop on Large Language and Foundation Models (WLLFM'24)
In conjunction with the IEEE BigData Conference 2024
Washington DC, USA
December 2024
News
[24 May 24] After a very successful first iteration, we are happy to inform you that the workshop on Large Language and Foundation Models will be hosted in conjunction with the IEEE BigData Conference 2024 in Washington DC this year! This year's workshop will be organized by Rafet Sifa (University of Bonn and Fraunhofer IAIS Germany), Dhaval Patel (IBM Research US), Linsey Pang (Salesforce US), Wei Liu (University of Sydney Australia) and Tobias Deußer (Fraunhofer IAIS Germany).
About the workshop
Large language and foundation models have had a significant impact on the field of Artificial Intelligence. These models excel at complex language-related tasks like text generation, sentiment analysis, machine translation, and question answering, often surpassing human performance. However, there is still a gap between theoretical understanding and practical implementation. To bridge this gap, our workshop brings together researchers, practitioners, and industry experts to share experiences, insights, and best practices in leveraging these models. The workshop aims to foster collaboration between academia and industry, providing a platform to discuss advancements, address challenges, and explore opportunities in the field of foundation models and large language models. Additionally, this year's workshop will feature keynote presentations from renowned experts and a group discussion on problem-solving with large language models, complementing the innovative research papers that will serve as a basis for discussion.
Call for Papers and Submission
Topics of interest include, but are not limited to:
Model Training and Optimization:
Techniques to deal with hallucinations
Training data for LLMs
Efficient and stable techniques for training and fine-tuning LLMs
Scalable approaches for distributed model training
Middleware for scale-out data preparation for LLM training
Workflow orchestration for end-to-end LLM life cycle
Resource management for compute and energy efficient model training
Representation learning
Model Utilization and Integration:
Using LLMs effectively as tools for Reinforcement Learning or search
Enhancing LLM capabilities by using external tools such as search engines
Visual Prompt Tuning and in-context learning
Enabling easy, high-utilization experimentation for training foundation models in the cloud
Strategies for scaling resources when training or fine-tuning foundation models
Instruction tuning including generation of instruction tuning data
Parallel training: data, model, and tensor parallelism (attention and weights)
Distributed workflows for data cleansing and model usage (e.g., LangChain)
Principled AI
Investigating reasoning capabilities of LLMs
Retrieval Augmented Generation
Alternative architectures such as State Space Models
Compact Language Models and Knowledge Distillation:
Knowledge representations for training small/compact language models
Evaluation of different teacher-student distillation and model compression strategies
Techniques for efficient data encoding to maintain linguistic properties in compact models
Deployment of lightweight models in resource-constrained environments
Case studies on the effectiveness of compact models in various NLP tasks
Application-Specific Models:
Math LLMs
Multimodal Foundation Models
Trustworthy Foundation Models
Large-scale Visual Foundation Models
Time-series foundation models for forecasting, prediction, and control
Multi-agent systems using LLMs
Recommender systems using LLMs
Knowledge management using LLMs
Knowledge Incorporation and Adaptation:
Approaches for handling knowledge recency and effectively updating knowledge within LLMs
Incorporating domain knowledge in LLMs
Evaluation and Benchmarking:
Additional benchmarks to fill the gap between human and automatic reference-based evaluation
Paper formats and submission instructions will be made available soon.
For Tutorials: Researchers and practitioners are invited to submit tutorial proposals. Please contact us at Rafet.Sifa@iais.fraunhofer.de with the subject line "WLLFM Tutorials".
Important Dates
Oct 31, 2024: Due date for full workshop paper submissions
Nov 4, 2024: Notification of paper acceptance to authors
Nov 20, 2024: Camera-ready of accepted papers
Dec 15-18, 2024: Workshops (the exact workshop date will be determined based on the availability of the keynote speakers)
Organization
Workshop Chairs
Rafet Sifa (University of Bonn and Fraunhofer IAIS Germany)
Dhaval Patel (IBM Research US)
Linsey Pang (Salesforce US)
Wei Liu (University of Sydney Australia)
Publication Chair
Tobias Deußer (Fraunhofer IAIS Germany)
Program Committee
TBD
Contact
For questions and comments please contact Rafet Sifa at: Rafet.Sifa@iais.fraunhofer.de