Paper 1: Age Classification in Child Sexual Abuse Materials
Michal Kozbial and Mateusz Kowalczyk (NASK National Research Institute, Poland)
Paper 2: Assessing the Reliability of Persona-Conditioned LLMs as Synthetic Survey Respondents
Erika Elizabeth Taday Morocho, Lorenzo Cima, Tiziano Fagni, Marco Avvenuti and Stefano Cresci (IIT-CNR, University of Pisa, University of Florence, Italy)
Paper 3: An Interpretable AI Decision-Support System for Early-Stage Hiring
Vira Filatova, Andrii Zelenchuk and Dmytro Filatov (Applied Artificial Intelligence Covijn Ltd., Artificial Intelligence and Computer Vision Aimech Technologies Corp., United Kingdom, Ukraine, USA)
Paper 4: Prompt Scene Investigation: Uncovering the Evidence in LLMs
Lidaw Fabrice Ledjaki, Yuan-Chen Chang, Mohamed Loutis and Esma Aïmeur (University of Montréal, Canada)
Paper 5: Industrialized Deception: The Collateral Effects of LLM-Generated Misinformation on Digital Ecosystems
Alexander Loth, Martin Kappes and Marc-Oliver Pahl (Frankfurt University of Applied Sciences, IMT Atlantique, UMR IRISA, Germany, France)
Paper 6: Blockchain and DID Integration for an Investigation System
Jyotibha R Chinchankar, Gautham P Kini, Shreya Rao, Rakshitha M M and Niranjan J H (Alva’s Institute of Engineering and Technology, India)
Paper 7: The More You Say, the More You Risk: Ethical Concerns in Large Language Model Reasoning Frameworks
Jinman Zhao, Linbo Cao, Xueyan Zhang, Ken Shi, Yining Wang and Gerald Penn (University of Toronto, University of Waterloo, Canada)
Paper 8: Not Just People Anymore: When AI Agents Become Insiders
Christian Kengne (Polytechnique Montréal, Canada)
Paper 9: DEMO: Ontology-Based Automated Security Analysis for AI Systems and Applications
Jonathan Roy, Lina Rashdan and Fehmi Jaafar (Université du Québec à Chicoutimi, The George Washington University, Canada, USA)