Title: Preparing for the Emerging Criminal Threats from Generative AI
Abstract: Generative Artificial Intelligence enables users to create highly realistic text, images, video and audio using off-the-shelf tools. Increasingly, these tools can be used in real-time, interactive settings, raising the challenge of new types of automated criminal and security threats at scale. In this talk we will highlight some of the emerging trends in criminal misuse of generative AI, with a specific focus on threats involving voice-enabled chatbots in conversational settings. We will also present our current research initiative: the development of a generative AI test range that enables research on testing, detecting and defending against such threats.
Bio: Chris Leckie is a professor in the School of Computing and Information Systems at the University of Melbourne. He has over 30 years of research experience in AI and ML for telecommunications, cyber security and protecting critical infrastructure. He has published more than 300 papers in this field, and his research has been operationally deployed in a variety of industries.
Title: Securing AI: From Agentic AI to Software, Models, and the Physical World
Abstract: The rapid evolution of AI has shifted systems from passive models to agentic AI capable of autonomous reasoning, code generation, and interaction with real-world environments. While these advances enable powerful applications, they fundamentally challenge existing assumptions in security, software engineering, and governance. This talk synthesises recent research that reframes AI security as a cross-layer lifecycle problem, spanning learning models, LLM-enabled software, autonomous agents, and cyber-physical systems. We examine how agentic LLMs expand the attack surface, enabling dynamic extraction, evasion of vulnerability detection, and systematic ethical and compliance failures in code generation. At the model level, persistent backdoors and adversarial perception further undermine trust. As AI systems increasingly operate in IoT and safety-critical environments, this talk argues for trustworthiness by design, integrating agent-aware threat models, continuous auditing, and robust learning foundations across the AI lifecycle, and aims to inspire the next generation of research in trustworthy and resilient AI.
Bio: Professor Yang Xiang received his PhD in Computer Science from Deakin University, Australia. He is currently a full professor and the Director of the Digital Capability Research Platform at Swinburne University of Technology, Australia. Over the past 20 years, he has worked in the broad area of Cybersecurity, covering software, system, network, and application security. He has published more than 300 research papers in international conferences and journals in Cybersecurity, such as ACM CCS, IEEE S&P, USENIX Security, NDSS, IEEE TDSC, and IEEE TIFS. He is the Editor-in-Chief of the SpringerBriefs on Cyber Security Systems and Networks. He serves as an Associate Editor of ACM Computing Surveys, and has served as an Associate Editor of IEEE Transactions on Dependable and Secure Computing, IEEE Internet of Things Journal, IEEE Transactions on Computers, and IEEE Transactions on Parallel and Distributed Systems. He is a current member of the College of Experts (CoE) of the Australian Research Council (ARC). He is a Fellow of the IEEE.
Title: Private Data Processing in the Quantum Age
Abstract: This talk will examine a common dilemma in today's digitalised world: we want to harness the power of data, but how do we do this without harming data privacy? We will explore ways to process data in a privacy-respecting manner using the power of cryptography, particularly privacy-enhancing technologies. We will also discuss the challenges posed by significant advancements in quantum computing, and consider what the quantum age means for cryptography and privacy-enhancing technologies.
Bio: Dr Esgin is a cryptography expert and Senior Lecturer at the Faculty of Information Technology, Monash University, Australia. His research spans the theoretical and practical dimensions of developing practically efficient cryptographic algorithms, focusing particularly on quantum-safe and privacy-enhancing technologies, with a strong emphasis on applying these algorithms to solve real-life problems. Dr Esgin's research has been recognised and funded by prestigious awards and grants, including an Amazon Research Award, a Google Research Scholar Award, an ARC Discovery Project grant, and the Vice-Chancellor's Commendation for Thesis Excellence Award for his PhD dissertation at Monash University.