About
Building on the success of the Texas Colloquium on Distributed Learning (TL;DR 2023 and TL;DR 2025), held at Rice University, and the successful workshop on Applied Algorithms for ML (https://aaforml.com/) in Paris, we propose to organize a workshop that delves deeper into the evolving field of large-scale learning and AI.
We envision this two-and-a-half-day summit taking place in the summer of 2025 at the Rice Global Paris Center, bringing together approximately 60-70 international researchers from academia and industry.
Feature learning, a cornerstone of modern ML, encompasses techniques that allow a system to automatically discover, from raw data, the representations needed for detection or classification. This process is critical to building models that generalize effectively from limited or complex datasets. Large Language Models (LLMs), which have shown remarkable success in generating human-like text and other forms of media, rely heavily on sophisticated feature learning to understand and generate coherent, contextually appropriate responses. Furthermore, the intersection of optimization and learning is paramount, especially in distributed learning, where computational resources are spread across many machines and data privacy is a primary concern. Optimization techniques tailored to such settings can significantly enhance the efficiency and effectiveness of learning, enabling the practical deployment of advanced machine learning models, including LLMs, at large scale.
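To make the optimization-for-distributed-learning theme concrete, here is a minimal, illustrative sketch (not any specific method discussed at the workshop) of Federated Averaging, a standard distributed optimization scheme in which each client runs a few local gradient steps on its private data and a server averages the resulting models. The least-squares objective, function names, and synthetic data below are assumptions chosen purely for illustration.

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step on a client's local least-squares loss ||Xw - y||^2 / n."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, clients, lr=0.1, local_steps=5):
    """One round of Federated Averaging: each client refines the model locally,
    then the server averages the client models, weighted by local data size.
    Raw data never leaves the clients; only model parameters are exchanged."""
    updates, sizes = [], []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(local_steps):
            w_local = local_sgd_step(w_local, X, y, lr)
        updates.append(w_local)
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Synthetic demo: three clients whose data share the true model w* = [1, -2].
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(50):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # converges toward [1, -2]
```

The weighted average mirrors the fact that clients hold unequal amounts of data; privacy-preserving and communication-efficient variants of exactly this update rule are among the optimization questions the workshop targets.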
The workshop aims to address critical advancements and challenges in large-scale learning, with special emphasis on understanding feature learning, the capabilities and complexities of Large Language Models (LLMs), and the crucial intersection of optimization and machine learning in distributed environments. By focusing on these advanced topics and bringing together experts in these fields, we seek to push the boundaries of what distributed learning can achieve and to foster an environment of innovation and collaboration that will lead to new insights and advancements in federated learning technologies.
Aims of the symposium
1. Explore advanced feature learning techniques in distributed settings
2. Investigate the integration and scalability of LLMs in distributed environments
3. Advance optimization methods for efficient distributed learning
4. Address privacy, fairness, and security concerns in distributed learning
5. Examine continual learning and mixture of models in distributed contexts
Rice CS
Rice CMOR
Rice ECE
Rice Global Paris Center