Objectives
We invite submissions of research papers and works-in-progress that address privacy issues in LLM and NLP systems. Topics of interest include, but are not limited to:
Privacy-Preserving Techniques
Data Management, Anonymization, and Sanitization
Machine Unlearning
Adversarial Attacks and Defenses
Ethics, Regulatory Aspects, and Responsible AI
Fairness and Accountability
Interpretability and Transparency
AI Agents and Collaborations
Secure and/or Privacy-Preserving Distributed Machine Learning
Decentralized LLMs
Evaluation and Metrics
Case studies (in areas such as consistency checking, code generation, bug finding, and prompt privacy) or implementations within specific domains and applications, such as manufacturing and IoT, power grids and energy, and medicine and healthcare
Emerging challenges in LLM deployment, e.g., knowledge distillation and retrieval-augmented generation (RAG)
We welcome original contributions that have not been published previously and are not currently under consideration by any other conference or journal. Submissions must follow the ACM SIGS format and must not exceed 12 pages, including references and appendices. All other formatting must follow the AsiaCCS 2026 guidelines at https://asiaccs2026.cse.iitkgp.ac.in/call-for-papers/.
Important: The review process is double-blind; papers must not include any identifying information, such as author names, affiliations, or acknowledgments.
Ethical Declaration and Consideration
All submitted papers must include a mandatory Ethical Declaration and Consideration section. This section should outline how ethical guidelines were followed, particularly with respect to the use of LLMs and NLP techniques. Authors must explicitly discuss any ethical concerns, including data privacy, bias mitigation, and the involvement of human subjects or domain experts in their research. Papers without this section will not be considered for review.
Important Dates
Submission Deadline: 20 January 2026
Notification of Acceptance: 16 March 2026
Camera-Ready Deadline: 1 April 2026
Workshop Date: 2 June 2026
Nishanth Chandran is a Senior Principal Researcher at Microsoft Research, India. His research interests are in problems related to cryptography, secure computation, and AI security. Prior to joining Microsoft Research, India, Nishanth was a Researcher at AT&T Labs, and before that, he was a Post-doctoral Researcher at Microsoft Research Redmond.
Nishanth is a recipient of the 2010 Chorafas Award for exceptional achievements in research, and his work has been covered in scientific and popular media, including Nature and MIT Technology Review. He has published numerous papers in top computer science conferences and journals, including Crypto, Eurocrypt, IEEE S&P, CCS, STOC, FOCS, SIAM Journal on Computing, and Journal of the ACM.
His work on position-based cryptography was selected as one of the top three works and invited as a plenary talk at QIP 2011. Nishanth has served on the technical program committees of many top cryptography conferences on several occasions, and he holds several US patents. He received his Ph.D. and M.S. in Computer Science from UCLA, and his B.E. in Computer Science and Engineering from Anna University (Hindustan College of Engineering), Chennai.
Dr. Franziska Boenisch is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security, where she co-leads the SprintML lab. Her research focuses on private and trustworthy machine learning; during her Ph.D. at Freie Universität Berlin and Fraunhofer AISEC, she pioneered the notion of individualized privacy in ML.
Before joining CISPA, she was a Postdoctoral Fellow at the University of Toronto and the Vector Institute. She received an ERC Starting Grant in 2025 for research on privacy in foundation models and has been recognized with the Fraunhofer ICT Dissertation Award (2023), a GI Junior Fellowship (2024), and a Werner-von-Siemens Fellowship (2025).