FLCA - Federated Learning for Critical Applications
Workshop@AAAI26
26 January 2026
Introduced in 2016, Federated Learning (FL) has rapidly emerged as a key privacy-preserving distributed learning paradigm: different institutions can jointly train more robust models, benefiting from the federation, without sharing raw training data. Despite strong traction in research, FL's real-world use in critical applications remains limited. This is especially true in fields such as healthcare, autonomous driving, and finance, where privacy concerns, robustness to failures or attacks, a lack of theoretical guarantees, and infrastructural limitations pose major challenges. As a result, only a few FL systems are truly operational in production environments.
This workshop focuses on the unique challenges of deploying FL systems in real-world, safety- or privacy-critical settings: environments where failure is not an option. Unlike existing FL workshops that emphasize general methods, FLCA emphasizes FL under realistic deployment constraints, including privacy guarantees, adversarial robustness, system-level failures, and regulatory compliance. Our emphasis is on practical FL systems that are scalable, robust, and aligned with real-world operational and infrastructural constraints.
This workshop brings together researchers from industry and academia to exchange theoretical insights and use-case scenarios that can inform the design of federated learning systems for critical applications.
We encourage both applicative and theoretical contributions, including studies on specific settings and benchmarking tools.
Why FLCA, and why now? While prior workshops have addressed algorithmic advances and scaling FL to large models, the hard path to real-world deployment remains critically underexplored. FLCA is the first workshop to place deployment in critical applications at its center, uniting academic theory with the operational constraints of high-stakes domains such as healthcare, finance, and autonomous systems. There is a general mismatch between academic developments in FL and the challenges of deployment. FLCA therefore addresses robustness, fairness, and resource constraints in the context of critical applications, alongside failure modes, regulatory compliance, and deployment bottlenecks: the issues that determine whether FL succeeds or fails in practice.
Opening remarks [09:00–09:05]
Speaker: TBD: organizers (University of TBD)
Keynote [09:05–09:45]
Speaker: TBD: organizers (University of TBD)
Keynote [09:45–10:25]
Speaker: TBD: organizers (University of TBD)
Coffee break [10:30–11:00]
Keynote [11:00–11:40]
Speaker: TBD: organizers (University of TBD)
Technical presentations [11:40–12:30]
Speaker: TBD: organizers (University of TBD)
Speaker: TBD: organizers (University of TBD)
Speaker: TBD: organizers (University of TBD)
Lunch [12:30–14:00]
Keynote [14:00–14:40]
Speaker: TBD: organizers (University of TBD)
Keynote [14:40–15:20]
Speaker: TBD: organizers (University of TBD)
Technical presentations [15:20–15:35]
Speaker: TBD: organizers (University of TBD)
Coffee break [15:35–16:00]
Panel session [16:00–16:40]
Speaker: TBD: organizers (University of TBD)
Poster session [16:40–17:30]
Topics of interest include, but are not limited to:
Federated, distributed, and decentralized learning approaches for critical applications such as healthcare, advertising, blockchain, and social networks.
Infrastructure and system design for deploying real-world FL pipelines.
Techniques for FL across different learning paradigms, such as multi-task learning, meta-learning, semi-supervised learning, self-supervised learning, and continual learning.
Theoretical approaches with realistic assumptions for practical settings.
FL with heterogeneous and unbalanced (non-IID) data distributions.
Security and privacy in FL systems, including differential privacy, adversarial robustness, trustworthiness, and Machine Unlearning at scale.
Secure multi-party computation, trusted execution environments, and high-performance computing for federated computations.
Variants of FL and decentralized alternatives, including vertical FL, split learning, and gossip learning.
Other non-functional aspects in FL for critical use cases, such as fairness, explainability, and personalization.
Tools and resources (e.g., benchmark datasets and software libraries).
Submission Due Date: Friday, October 24th, 2025
Notification of Acceptance: Friday, November 7th, 2025
Early Registration Rate Ends: Sunday, November 16th, 2025
Camera-ready Papers Due: TBD
Workshop Date: Monday, January 26th, 2026, Singapore
We invite submissions of original research on the previously mentioned aspects of Federated Learning (see the complete list of topics above). Submissions should be best-effort anonymized and follow the AAAI'26 template. The review process will be double-blind. We accept both short papers (4 pages + references + optional appendix) and long papers (8 pages + references + optional appendix). Submissions exceeding the long paper format will be desk-rejected.
Submissions are processed in OpenReview: https://openreview.net/group?id=AAAI.org/2026/Workshop/FLCA
The FLCA workshop does not have formal proceedings, i.e., it is non-archival. Accepted papers and their review comments will be made public on OpenReview after the end of the review process, while rejected and withdrawn papers and their reviews will remain private.
We welcome submissions presenting novel research, ongoing (incomplete) projects, and recently published results.
All accepted papers are expected to be presented in person. The workshop will not provide support for virtual talks.
Accepted papers will be presented either as talks or posters. Talks and posters are held to the same scientific standard: the reviewing and selection processes are identical, and the assignment to oral or poster presentation is based on topic, not on quality.
Professor
Lamarr Institute for Artificial Intelligence and Machine Learning
TU Dortmund
Dortmund, Germany