About Me
I’m a Research Scientist at CSIRO’s Data61 (Sydney) working on AI security, deepfake & generative media forensics, and human‑AI collaboration for high‑stakes, real‑world settings (e.g., cybersecurity operations).
Research Vision
I aim to make AI secure, adaptive, and human‑aligned. Practically, that means:
advancing generalizable detection methods that remain robust under distribution shift and novel attack vectors;
designing continual/lifelong learning systems that adapt with minimal retraining; and
building human‑AI teaming frameworks that improve decision quality and accountability in socio‑technical contexts.
My work spans datasets and benchmarks, detection frameworks, and deployment‑focused studies—with publications across top venues such as NeurIPS, KDD, WWW, ACM MM, and ACM Computing Surveys.
Notable Works
Through the Lens — Benchmarking Deepfake Detectors Against Moiré-Induced Distortions (NeurIPS '25)
From Prediction to Explanation — Multimodal, Explainable, and Interactive Deepfake Detection Framework for Non-Expert Users (ACMMM '25)
LLMs in the SOC — An Empirical Study of Human-AI Collaboration in Security Operations Centres (S&P '25)
Alert Fatigue in Security Operations Centres — Research Challenges and Opportunities (ACM Computing Surveys, 2025)
Am I a Real or Fake Celebrity? — Evaluating Face Recognition and Verification APIs Under Deepfake Impersonation (WWW '22)
FakeAVCeleb — A Novel Audio-Video Multimodal Deepfake Dataset (NeurIPS '21)
One Detector to Rule Them All — Toward a General Deepfake Attack Detection Framework (WWW '21)
CoReD — Continual Representation with Distillation to Generalize Fake Media Detection (ACMMM '21)
CL-MPPCA — Detecting Anomalies in Space Using Multivariate Convolutional LSTM with Mixtures of Probabilistic PCA (KDD '19)