Notification: 8 June 2026
Workshop: One-day workshop during 15-17 August 2026 (at IJCAI/ECAI 2026)
Foundation models and GenAI systems are increasingly embedded in real workflows and systems, but their reliability under distribution shift, adversarial inputs, tool use, and human interaction remains limited. The RobustifAI workshop aims to explore and discuss a holistic approach in which robustness is addressed across (i) model behavior, (ii) the operational lifecycle, and (iii) user-facing interaction and governance. The workshop invites contributions spanning theory, algorithms, tools, evaluation, and deployment case studies.
Relevant topic areas include (but are not limited to):
A. Technical robustness
Robust learning under distribution shift and noisy supervision
Calibration and uncertainty for GenAI, self-monitoring, abstention
Hallucination mitigation, faithfulness, grounding, citation integrity
Robust multimodal generation and reasoning
Adversarial robustness for LLMs and VLMs, prompt injection and jailbreak defense
Robust RAG, retrieval failures, poisoning resistance, provenance
B. Operational robustness
Safe adaptation and fine-tuning, continual learning, safe personalization
Runtime monitoring and policy enforcement for agentic tool use
Stress testing, red teaming, systematic failure discovery
Secure deployment, privacy, and governance for GenAI operations
Robust MLOps and assurance evidence pipelines
C. User robustness and human-centered robustness
Robust interaction with diverse users, human-in-the-loop safeguards
Misuse prevention, safety alignment, policy and technical co-design
Fairness and harm mitigation as robustness objectives
Requirements engineering and assurance cases for GenAI systems
D. Neural-symbolic and assurance methods
Neural-symbolic GenAI, constraints, logic-guided generation
Formal specification languages for GenAI behaviors and requirements
Verification, testing, falsification, runtime verification, monitoring
Tooling, benchmarks, and reproducible evaluation protocols
Submission link: https://openreview.net/group?id=ijcai.org/IJCAI-ECAI/2026/Workshop/RobustifAI
Page limit: Authors may submit long papers (7 pages plus unlimited pages of references) or short papers (up to 4 pages plus unlimited pages of references).
Format: A PDF file, using the LaTeX style or Word template for IJCAI.
Author details: The review process is double blind, so author details should be omitted from the submitted PDF file.
Reviewing: All authors are expected to review 1-2 papers if called upon.
Supplementary material: Authors may submit supplementary material (e.g., appendices, data, source code) as an additional PDF file. Reviewers will not be expected to review this, so please make sure the main paper is self-contained.
Copyright: The workshop proceedings will NOT be archived, so authors retain the copyright and are free to submit the work to other venues after the review process (please do not submit to another venue simultaneously).
If you have any questions about the IJCAI Workshop on RobustifAI, please contact the organizers via the following email address:
chih-hong.cheng AT uol.de