STAMM-2025
1st International Workshop on Security and Truth in the Era of AI-Generated
Media Manipulation
In Conjunction with
The 39th International Conference on Advanced Information Networking and Applications (AINA-2025)
Open University of Catalonia
Barcelona, Spain
April 9 to April 11, 2025
** NEWS ** Special Issue Approved!
The authors of selected papers presented at STAMM-2025 will have the opportunity to submit an extended version of their contributions to the Special Issue “Disrupting Truth: Information Disorder in the Age of Generative AI” of Computers and Electrical Engineering (Elsevier).
As artificial intelligence continues to evolve, its impact on the media landscape has become both transformative and concerning. The ability to manipulate and generate synthetic content at scale has given rise to an era of widespread information disorder, where the authenticity of text, images, and videos is frequently called into question. From deepfakes to AI-generated misinformation, media manipulation presents a pressing global challenge that undermines trust in communication and fuels disinformation campaigns. This workshop focuses on the technological solutions needed to combat these threats and safeguard the integrity of information in the digital age.
STAMM-2025 aims to bring together experts from academia, industry, and research to discuss the latest advancements in AI technology addressing the detection, prevention, and mitigation of media manipulation. The workshop will explore how AI can act both as a catalyst for information disorder and as a solution to its most critical challenges. The event will provide a forum for the exchange of ideas, fostering the development of novel technologies that can restore trust in digital content and protect against AI-driven misinformation.
We invite research contributions that lie at the intersection of AI, media manipulation, and information disorder. Our focus is on solutions that leverage AI to detect manipulated content, defend against adversarial AI techniques, and ensure the authenticity of media within an increasingly fragmented information ecosystem. Topics of interest include, but are not limited to:
AI-driven media manipulation detection and prevention systems
Deepfakes: creation, detection, and mitigation technologies
Multimodal AI approaches for detecting manipulated text, images, and video
Real-time detection of generative AI-manipulated media
Adversarial AI: attack vectors, security challenges, and defense mechanisms
AI-based content verification, validation, fact-checking, and authentication tools
Quantum computing solutions for enhancing media integrity
Generative adversarial networks (GANs) for media creation and security
Blockchain technology for securing digital media authenticity
Federated learning for decentralized detection of manipulated content
Machine learning models for detecting AI-generated misinformation
Advanced natural language processing (NLP) techniques for detecting fake news and misinformation
AI-based image and video forensics for media authenticity verification
Computer vision methods for identifying synthetic and manipulated visuals
Edge computing solutions for distributed AI-based content validation and media integrity
Neural networks for cross-modal detection of manipulated media
Explainable AI methods for transparency in media verification processes
Sentiment analysis for detecting fake news and deceptive content
Paper Submission Deadline: December 25, 2024
Author Notification: January 10, 2025
Author Registration: January 25, 2025
Camera-Ready Paper Submission: January 25, 2025
Authors should submit original research papers formatted in accordance with the AINA-2025 guidelines.
Please visit the following website: AINA-2025 Paper Submission and Publication
At least one author of each accepted paper is required to register and present the work at the conference (AINA-2025 Registration); otherwise, the paper will be removed from the digital library after the conference.
Please follow the announcements on the AINA-2025 website.
University of Salerno, Italy
University of Salerno, Italy
University of Salerno, Italy
School of Cyber Science and Engineering, Wuhan University, China
Edinburgh Napier University, United Kingdom
Paola Barra - Parthenope University of Naples, Italy
Lucia Cascone - University of Salerno, Italy
Lucia Cimmino - University of Salerno, Italy
Haroon Elahi - Southern University of Science and Technology, China
David Freire-Obregon - University of Las Palmas de Gran Canaria, Spain
Oana Geman - Stefan Cel Mare University, Romania
Fei Hao - Shaanxi Normal University, China
Teresa Murino - University of Naples Federico II, Italy
Matteo Polsinelli - University of Salerno, Italy
Florin V. Pop - University Politehnica of Bucharest, Romania
Imad Rida - University of Applied Sciences for Technology Compiegne, France
Saiyed Umer - Aliah University, India
Alberto Volpe - University of Salerno, Italy
Shaohua Wan - University of Electronic Science and Technology of China, China
Hao Wang - Xidian University, China
Muhammad Umer - The Islamia University of Bahawalpur, Pakistan
Chiara Pero
University of Salerno, Italy
cpero@unisa.it