Submission Deadline Extended: 20 July 2025 (11:59 PM AoE)
The 3D-Sec: Deepfake, Deception, and Disinformation Security Workshop at ACM CCS 2025 aims to address the escalating security risks posed by AI-driven misinformation, digital deception, and synthetic media. With the rapid advancement of Generative AI, Large Language Models (LLMs), and adversarial machine learning, attackers now have access to highly realistic deepfakes, automated disinformation pipelines, and AI-powered deception techniques. These capabilities have led to a surge in AI-enhanced fraud, impersonation, and cyber warfare, threatening trust, security, and governance on a global scale.
Deepfakes, AI-generated misinformation, and synthetic personas have already been exploited for political manipulation, financial scams, social engineering attacks, and cyber-enabled warfare. Sophisticated AI models can now fabricate convincing narratives, generate realistic fake media, and manipulate public discourse at an unprecedented scale, making detection and mitigation increasingly challenging. The weaponization of Generative AI in cybercrime, nation-state operations, and large-scale influence campaigns underscores the urgent need for robust countermeasures, forensic tools, and security frameworks to safeguard digital ecosystems.

Topics of interest include, but are not limited to:
❖ Deepfake Generation and Detection for Cybersecurity Applications
❖ Deepfake Forensics and Adversarial Robustness of Detectors
❖ Cyber Threat Modeling for AI-Generated Media Manipulation
❖ Deepfake Phishing and Impersonation Attacks
❖ Automated Video/Audio Spoofing for Fraud and Cybercrime
❖ Defensive AI Techniques for Detecting Synthetic Media in Cyberattacks
❖ AI-Generated Propaganda and Cybersecurity Risks
❖ LLMs in Cyber Warfare, Automated Fake News, and Disinformation Amplification
❖ Computational Approaches to Detecting Manipulated Narratives
❖ Security Frameworks for Detecting and Mitigating AI-Generated Disinformation
❖ Network Analysis of AI-Driven Disinformation Campaigns in Cyberattacks
❖ Cybercrime and Legal Aspects of AI-Generated Disinformation
❖ Adversarial AI for Social Engineering, Scams, and Automated Fraud
❖ Deepfake-Enhanced Phishing and Business Email Compromise (BEC) Attacks
❖ AI-Powered Deception for Cybersecurity Red Teaming
❖ Metrics for Assessing Deception and Manipulation in Cyber Operations
❖ AI-Driven Disinformation in Cyber-Espionage and Nation-State Attacks
❖ Countermeasures and Detection Strategies for Adversarial Deception
❖ LLM-Powered Phishing, Impersonation, and Fraud Detection
❖ Prompt Injection Attacks and Adversarial Manipulation of LLMs
❖ Automated Misinformation Campaigns Using LLM-Generated Narratives
❖ Security Risks of AI-Generated Social Engineering and Disinformation Bots
❖ LLM-Based Malware, Code Obfuscation, and Automated Cyberattacks
❖ Digital Provenance and Watermarking for LLM-Generated Content Verification
❖ Adversarial Attacks on Deepfake and LLM-Based Security Systems
❖ AI-Based Threat Intelligence for Detecting AI-Generated Cyberattacks
❖ Watermarking and Content Provenance Verification for AI-Generated Media
❖ Human-AI Collaboration in Detecting AI-Generated Threats in SOCs
❖ Forensic Techniques for Attribution of Synthetic Media in Cybersecurity Incidents
❖ Robust Authentication and Identity Verification against AI-Generated Attacks