The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.
The widespread integration of Large Language Models (LLMs) into real-world systems has introduced new challenges that demand urgent attention from the machine learning, cybersecurity, and systems research communities. While LLMs demonstrate exceptional performance across a variety of natural language processing tasks and are increasingly combined with other modalities such as vision and audio, their deployment in high-stakes environments, including healthcare, law enforcement, autonomous vehicles, and critical infrastructure, exposes serious vulnerabilities that can no longer be ignored.
This special session will focus on the technical foundations, emerging risks, and defense mechanisms surrounding the robustness and security of LLMs. Theoretical advances and empirical studies have shown that even the most capable LLMs are susceptible to manipulation through prompt injection, adversarial queries, or syntactic perturbations that bypass safety mechanisms and elicit harmful or misaligned outputs. Moreover, model performance often deteriorates sharply under distributional shifts or input noise, making these models brittle in real-world conditions. In the multimodal setting, where LLMs interact with vision encoders or other sensory modules, the attack surface becomes significantly broader, raising complex challenges in cross-modal consistency, modality-specific perturbation, and unified threat modeling. Another growing concern is the leakage of sensitive information through model outputs.
The ROSE-LLM special session aims to address these interconnected concerns by creating a dedicated platform for advancing the state of research in secure, robust, and trustworthy LLM development. We invite contributions that propose new threat models, characterize vulnerabilities in both text-only and multimodal LLMs, and develop principled defense strategies grounded in adversarial training, robust optimization, certified defenses, data obfuscation, or secure model fine-tuning. Equally important are efforts that propose systematic evaluation protocols, scalable benchmarks, and toolkits for measuring LLM behavior under adversarial and real-world stress conditions. We are also interested in system-level studies that explore the deployment of LLMs on edge devices, in federated environments, or in distributed systems with strict privacy, latency, and compliance constraints. Finally, we welcome work on how foundation models can be used for anomaly detection.
This half-day special session seeks to unite researchers across machine learning theory, security, privacy, and AI systems design, providing a forum to exchange ideas, share tools, and establish best practices. By combining technical paper presentations with interactive sessions, ROSE-LLM will catalyze a robust dialogue around the future of secure language modeling.
Paper Submission Deadline: August 20, 2025
Notification of Acceptance: September 10, 2025
Camera-Ready Submission & Author Registration Deadline: September 20, 2025
We invite original research contributions, case studies, benchmarks, and position papers in areas including but not limited to:
Threats and Vulnerabilities in LLMs (e.g., prompt injection, jailbreaks, model inversion)
Robustness of LLMs under Adversarial and Distributional Shifts
Adversarial Attacks on Instruction-Tuned and Open-Weight LLMs
Defense Strategies: Adversarial Training, Certified Defenses, Secure Fine-Tuning
Multimodal LLM Security: Threat Modeling across Vision, Text, and Audio
Leakage of Sensitive Information through Model Outputs
Efficient Fine-Tuning, Quantization, and Secure Adaptation of LLMs
Evaluation Benchmarks and Stress Testing for LLMs
Deployment in Edge and Distributed Systems with Latency and Compliance Constraints
Foundations of Trustworthy Anomaly Detection Using LLMs
LLM Misuse and Social Impact Studies
Scalable and Trustworthy Applications of Foundation Models
Limitations of Current Foundation Models in Detecting Anomalies in Complex and Noisy Datasets
Future Directions for Anomaly Detection Using Foundation Models
Best Practices and Governance for LLM Security
Accepted and registered papers will be published in the IEEE ICMLA 2025 Conference Proceedings and submitted for inclusion in IEEE Xplore.
Submitted manuscripts must adhere to the IEEE formatting requirements. Authors are encouraged to use the official IEEE manuscript templates for preparing their submissions.
Papers should be submitted through the CMT submission system, selecting the track titled:
“Special Session on Robustness and Security of Large Language Models (ROSE-LLM 2025)”.
Regular papers should be up to 6 pages long. Authors of regular papers may add up to 2 extra pages at an additional cost of $50 per page; a regular paper cannot exceed 8 pages. References and any other additional material must be included within this page limit. All papers will go through a double-blind peer review process.
At least one author of each accepted paper must complete the conference registration and present the work in person at ICMLA 2025.
Information and detailed instructions for submitting papers can be found at How to Submit, and those for registration can be found at Register Here.
All submissions will undergo a double-blind peer review process. Authors must therefore ensure that their names, affiliations, and any other identifying information do not appear in the submitted manuscript. References to the authors' own prior work should be made in the third person, and care must be taken to avoid revealing identities or institutions in the text, figures, links, or metadata. Papers that do not adhere to the double-blind peer review policy will be rejected without peer review.
For detailed instructions, please refer to the ICMLA 2025 guidelines on How to Submit.
Zhen Ni
Florida Atlantic University
Hasib-Al Rashid
Amazon Web Services (AWS)
Musfiqur Sazal
Oak Ridge National Lab
Minhaj Alam
University of North Carolina at Charlotte
Abdur R Shahid
Southern Illinois University
Khaled Mohammed Saifuddin
Northeastern University
Nur Imtiazul Haque
University of Cincinnati
Deepti Gupta
Texas A&M University - Central Texas
Alvi Ataur Khalil
Florida International University
Adnan Maruf
Missouri State University
Khandaker Mamun Ahmed
Dakota State University
Md Zarif Hossain
Florida Atlantic University
Awal Ahmed Fime
Florida Atlantic University
Saika Zaman
Florida Atlantic University
The ROSE-LLM special session is part of (co-located with) IEEE ICMLA 2025, which will be held at the Boca Raton Marriott at Boca Center, Florida, USA. All accepted papers will be presented during the conference as part of the official program, providing authors with the opportunity to engage with a broad interdisciplinary community of researchers and practitioners.
For more details, please visit the official conference website: https://www.icmla-conference.org