We invite AI researchers and practitioners across different disciplines and knowledge backgrounds to submit contributions dealing with the following (or related) topics:
Adverse Applications of Generative AI and Machine Learning (ML):
AI-driven online deception, misinformation/disinformation, and social manipulation
Jailbreaks, prompt/data injection, and scalable content automation using LLMs
Fraud, impersonation, and social engineering attacks powered by generative models
Mitigation Strategies and Safety Mechanisms for AI/ML:
Threat modeling, risk assessment, and adversarial testing of ML systems
Explainable AI (XAI), human-in-the-loop systems, and oversight frameworks
Safety guardrails for LLMs and multimodal AI systems
Retrieval-Augmented Generation (RAG) reliability and hallucination mitigation
Behavioral modeling and intrusion detection
Privileged Access Management (PAM) and privilege creep detection
Evaluation and Oversight of Generative and Agentic AI Systems:
Containment, alignment, and interpretability of generative models
Benchmarking, scenario libraries, and simulation testbeds
Post-deployment monitoring, incident reporting, and documentation
Secure agent architectures and compliance in high-risk AI domains
Technical and Societal Impacts of AI and ML Systems:
Privacy breaches, data leakage, and model inversion attacks
Bias, discrimination, and representational harms in ML
Psychological and societal effects of human-AI interaction
Trust erosion and polarization in digital ecosystems
Legal, Ethical, and Governance Considerations in AI:
Intellectual property and content ownership
Regulatory compliance for AI systems (e.g., GDPR, AI Act)
AI in surveillance and law enforcement
Responsible AI, transparency, and auditing mechanisms
Special topics of interest:
Misuse & Societal Harms of Generative AI: deepfakes and synthetic voice; misinformation/disinformation; scam automation and social engineering; privacy leakage; bias and representational harms; accessibility and language equity; harms in high-risk domains.
Safety, Evaluation & Governance of Generative Systems: jailbreaks, prompt/data injection, and containment; hallucination mitigation; RAG reliability; post-deployment monitoring; incident reporting; human oversight; alignment; interpretability; safety cases; auditing; compliance processes for high-risk GenAI.
We welcome submissions spanning the full range of theoretical and applied work, including user research, methods, datasets, tools, simulations, demos, and practical evaluations.
Submissions should be 7 pages for full technical papers, 4 pages for short papers or demos, and 2-3 pages for position papers; page limits include references. Papers should be formatted in the double-column ACM Companion Proceedings style. Templates (Word and LaTeX) can be found here: https://www.acm.org/publications/proceedings-template
Submissions will be peer-reviewed by 3-4 members of the program committee in a single-blind process.
Submission Deadline: December 28, 2025 (AoE)
Notification: January 13, 2026 (AoE)
Submission Link: https://easychair.org/conferences?conf=aiofai2026