As Artificial Intelligence (AI) systems, most of them based on deep neural networks, become increasingly important in critical and everyday applications, it is crucial to make them more explainable, secure, and trustworthy. Traditional approaches to AI development often prioritize performance over transparency or robustness, resulting in models that are difficult to interpret and vulnerable to misuse. The special session Explainability and Security in Trustworthy Artificial Intelligence Systems aims to address these challenges by bringing together researchers and practitioners working at the intersection of explainable AI (XAI), AI security, and AI for security.
The session aims to explore how XAI can enhance the usability of AI systems, enabling human stakeholders to make more informed decisions and thereby improving society's trust in such systems. It will also focus on advances in methods for securing AI against threats such as adversarial attacks, manipulation, and systemic vulnerabilities. Recognizing that robustness is essential for reliable deployment, the session will emphasize that XAI and security should be treated as inseparable. The topics of interest include using AI algorithms (for example, Large Language Models) to support the explainability and security of other AI algorithms in a cross-model way.
Moreover, the session aims to look beyond solutions created solely for AI algorithms by recognizing that intelligent systems play a dual role: they are both subjects that need explainability and security, and tools that can strengthen the transparency and security of larger systems integrating AI alongside other critical components. In such applications, AI can be used for tasks such as anomaly detection, monitoring, and reasoning. By fostering dialogue across these complementary perspectives, the special session seeks to support the design of intelligent systems that serve human needs, while also promoting resilience and transparency in increasingly complex intelligent ecosystems.
Advance and explore the state of the art in explainability and AI security to enhance robustness, trust, accountability, and informed, reliable human decision-making in AI systems.
Showcase approaches that highlight how explainability and security enhance the reliability of AI in complex, real-world settings and different applications.
Consider AI within larger architectures that include monitoring, anomaly detection, and reasoning components.
Inspire interdisciplinary dialogue to align research on explainability and security toward building safer, more transparent future intelligent systems.
XAI methods for different applications and data modalities.
Methods and frameworks for AI security.
AI methods that increase the security and reliability of the overall system as a whole.
Real-world use cases in which the reliability and robustness of systems are crucial.
Challenges in the theoretical and practical integration of XAI and AI security, as well as AI for security.
Using large language models and other AI models to improve the security and explainability of other AI systems in a cross-model manner.
Other emerging topics in explainability and security related to AI.
All submissions will be handled through the main WCCI 2026 conference submission system. When uploading their manuscript, authors must select the option indicating that the paper is intended for this Special Session (specific instructions coming soon).
The submission portal will be accessible via the official WCCI 2026 “Information for Authors” page: https://attend.ieee.org/wcci-2026/information-for-authors/
The paper format, page limits, and templates are the same as for the main conference. Authors should follow the standard WCCI 2026 guidelines, including the required IEEE LaTeX/Word templates, as provided on the page above.
More details soon. We hope to see you in Maastricht!
Paper submission deadline: 31 January 2026
Notification to authors: 15 March 2026
Camera-ready deadline: 15 April 2026