We invite AI researchers and practitioners across different disciplines and knowledge backgrounds to submit contributions dealing with the following (or related) topics:
Adverse Applications of Generative AI and Machine Learning (ML):
AI-driven online deception, misinformation/disinformation, and social manipulation
Jailbreaks, prompt/data injection, and scalable content automation using LLMs
Fraud, impersonation, and social engineering attacks powered by generative models
Mitigation Strategies and Safety Mechanisms for AI/ML:
Threat modeling, risk assessment, and adversarial testing of ML systems
Explainable AI (XAI), human-in-the-loop systems, and oversight frameworks
Safety guardrails for LLMs and multimodal AI systems
Retrieval-Augmented Generation (RAG) reliability and hallucination mitigation
Behavioral modeling and intrusion detection
Privileged Access Management (PAM) and privilege creep detection
Evaluation and Oversight of Generative and Agentic AI Systems:
Containment, alignment, and interpretability of generative models
Benchmarking, scenario libraries, and simulation testbeds
Post-deployment monitoring, incident reporting, and documentation
Secure agent architectures and compliance in high-risk AI domains
Human-Centered Modeling, Personalization, and Trust in Adaptive AI Systems:
User modeling and personalization in adaptive systems
Explainable and trustworthy AI for user-centric applications
Incentive mechanisms and persuasive technologies
Trust and reputation systems in multi-agent and peer-to-peer networks
Technical and Societal Impacts of AI and ML Systems:
Privacy breaches, data leakage, and model inversion attacks
Bias, discrimination, and representational harms in ML
Psychological and societal effects of human-AI interaction
Trust erosion and polarization in digital ecosystems
Legal, Ethical, and Governance Considerations in AI:
Intellectual property and content ownership
Regulatory compliance for AI systems (e.g., GDPR, AI Act)
AI in surveillance and law enforcement
Responsible AI, transparency, and auditing mechanisms
Special topics of interest:
Misuse & Societal Harms of Generative AI: deepfakes and synthetic voice; misinformation/disinformation; scam automation and social engineering; privacy leakage; bias and representational harms; accessibility and language equity; harms in high-risk domains.
Safety, Evaluation & Governance of Generative Systems: jailbreaks, prompt/data injection, and containment; hallucination mitigation; RAG reliability; post-deployment monitoring; incident reporting; human oversight; alignment; interpretability; safety cases; auditing; compliance processes for high-risk GenAI.
We welcome submissions spanning the full range of theoretical and applied work, including user research, methods, datasets, tools, simulations, demos, and practical evaluations.
Submissions should be 7 pages for full technical papers, 4 pages for short papers or demos, and 2-3 pages for position papers, including references, in the double-column ACM format. Papers should be formatted according to the ACM Companion Proceedings style. Templates (Word and LaTeX) can be found here: https://www.acm.org/publications/proceedings-template
Submissions will be peer-reviewed by 3-4 members of the program committee in a single-blind process.
Submission Deadline: December 28, 2025 (AoE)
Notification: January 13, 2026 (AoE)
Submission Link: https://easychair.org/conferences?conf=aiofai2026
Accepted papers published in the ACM proceedings may be subject to ACM’s Open Access publishing policy and an Article Processing Charge (APC).
If an author’s institution participates in ACM Open, the APC may be covered by the institution (no direct cost to authors).
Otherwise, authors may need to pay an APC. For many ACM conferences, the subsidized APC is typically $250 (ACM/SIG members) or $350 (non-members), depending on eligibility and ACM policy.
For official policy details, see the ACM policy page: https://www2026.thewebconf.org/about/acm-policy.html