IAPR Technical Committee 2 (TC2)
Structural and Syntactical Pattern Recognition
This month's research spotlight features Prof. Battista Biggio, one of the pioneers of adversarial machine learning and a leading researcher in AI security.
Introduce yourself.
My name is Battista Biggio, and I am a Professor of Computer Engineering at the Department of Electrical and Electronic Engineering of the University of Cagliari, Italy. I am the research co-director of AI Security at the sAIfer Lab, where our mission is to develop machine learning systems that are secure, trustworthy, and resilient to attacks. My research lies at the intersection of machine learning and cybersecurity, with a focus on understanding and addressing the vulnerabilities of AI systems when deployed in adversarial environments.
I currently serve as Associate Editor-in-Chief of Pattern Recognition (Elsevier), and previously chaired IAPR TC1 (Technical Committee on Pattern Recognition Theory and Applications) from 2016 to 2020. I am a Fellow of the IEEE, a Senior Member of the ACM, and a member of IAPR and ELLIS. Over the years, I have coordinated and contributed to more than ten research projects, including the recent Horizon Europe projects ELSA, Sec4AI4Sec, and CoEvolution. I also regularly serve as Area Chair for top-tier machine learning and computer security conferences such as NeurIPS and the IEEE Symposium on Security and Privacy.
How did you start your research in pattern recognition and machine learning security?
My interest in pattern recognition and machine learning began during my undergraduate studies in electronic engineering. As I progressed through my M.Sc. and Ph.D. programs, I became increasingly intrigued by the intersection of machine learning and security, and in particular by how learning algorithms behave when exposed to malicious inputs. It was only during my Ph.D., however, that I began exploring what would later become a central theme of my career: the vulnerability of machine learning models to adversarial attacks.
In 2012, my colleagues and I published one of the first works demonstrating gradient-based poisoning attacks against support vector machines, a widely used machine learning model at the time. We showed that by manipulating just a small fraction of training points, an attacker could subvert the entire learning process, significantly reducing the model’s performance on clean test data [1]. This work was later recognized with the prestigious ICML 2022 Test of Time Award, as it laid the foundation for studying the vulnerabilities of machine learning models to data poisoning and backdoor attacks.
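As a rough illustration of what training-data poisoning does (and only an illustration: the sketch below uses simple label flipping on a synthetic dataset as a stand-in for the gradient-based attack of [1], whose bilevel optimization is omitted, and the dataset, poisoning rate, and model choice are arbitrary), one can corrupt a small fraction of an SVM's training labels and compare its accuracy on clean test data before and after:

```python
# Toy poisoning sketch: label flipping stands in for the
# gradient-based attack of [1], which instead optimizes the
# injected points via gradients through the SVM solution.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Baseline: SVM trained on clean data.
clean_acc = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)

# Poison a small fraction (10%) of the training labels.
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.10 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

# Retrain on the corrupted set; test accuracy typically drops.
poisoned_acc = SVC(kernel="linear").fit(X_tr, y_poisoned).score(X_te, y_te)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```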
In 2013, we explored a simpler yet equally impactful setting: assuming the attacker could only modify input samples at test time, without tampering with the training data or the model itself. We found that small, carefully crafted perturbations, again optimized via gradient-based methods, were enough to deceive the model into misclassifying inputs. We were the first to demonstrate this effect on simple image classification and malware detection tasks [2]. Then, in 2014, with the discovery of adversarial examples in deep neural networks, the field of AI security truly took off, with tens of thousands of papers published in the last few years alone [3].
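The test-time setting can be sketched just as briefly. The toy example below is an assumption-laden simplification rather than the experimental setup of [2] (which targeted SVMs and neural networks on image and PDF-malware data): it pushes a correctly classified sample along the gradient of a linear classifier's decision function, in small steps of an arbitrarily chosen size, until the prediction flips:

```python
# Toy gradient-based evasion sketch in the spirit of [2]:
# perturb a test sample along the gradient of the decision
# function until it is misclassified. A linear surrogate is
# used so the gradient is simply the weight vector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a correctly classified sample to attack.
idx = int(np.where(clf.predict(X) == y)[0][0])
x0 = X[idx].copy()
x = x0.copy()

# For a linear model f(x) = w.x + b, the input gradient of f is w.
w = clf.coef_.ravel()
step = 0.05 * w / np.linalg.norm(w)
direction = -1.0 if clf.decision_function([x0])[0] > 0 else 1.0

# Take small gradient steps until the prediction flips (or give up).
for _ in range(200):
    x += direction * step
    if clf.predict([x])[0] != y[idx]:
        break

print("true label:             ", y[idx])
print("original prediction:    ", clf.predict([x0])[0])
print("adversarial prediction: ", clf.predict([x])[0])
print("perturbation L2 norm:   ", np.linalg.norm(x - x0))
```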
In summary, our pioneering work revealed that AI systems can be surprisingly brittle when exposed to adversarially crafted inputs—raising serious concerns for their deployment in areas like cybersecurity, biometrics, and other high-stakes applications.
Could you talk a little bit more about your current research in AI security?
Much of my current research focuses on building both a scientific and practical understanding of how AI systems can be attacked and defended. We study a wide variety of threat models, including evasion attacks at test time, poisoning attacks during training, and privacy-related threats. These are not just theoretical challenges—real-world AI systems are already facing adversarial manipulation in domains like malware detection, spam filtering, and online content moderation.
What makes this area particularly challenging—and exciting—is that it lies right at the interface of machine learning and cybersecurity. We need to bring together techniques and insights from both worlds: learning theory, optimization, and robust statistics on one side; threat modeling, attacker incentives, and system security on the other.
At the sAIfer Lab, we’re also investigating how these threats evolve as AI models scale up, especially with the rise of foundation models and generative AI. Ensuring that these systems remain secure, trustworthy, and resilient against malicious prompts, jailbreaks, and input manipulation will be one of the defining challenges of the next decade. I also believe that the solution to this challenge will not come from trying to improve the robustness and security of AI models in isolation. It will rather come from embracing a more system-level perspective: focusing on the security of the entire system architecture and design, not just on that of the individual AI components.
Any message to the readers?
AI is no longer confined to academic research—it’s embedded in products and services we interact with every day. That’s why AI security is not optional—it’s essential.
My message to researchers and practitioners is: don’t treat machine learning as a black box. Understand how your models work, how they can fail, and how they can be attacked. This is inherently an interdisciplinary effort, and we need contributions from experts in pattern recognition, computer security, systems design, and beyond to build AI systems we can trust.
I also encourage early-career researchers to engage with this growing field. There is still so much to explore, and your contributions could help shape the future of secure and responsible AI.
References
[1] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In J. Langford and J. Pineau, editors, 29th International Conference on Machine Learning (ICML), pages 1807–1814. Omnipress, 2012.
[2] B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, and F. Roli. Evasion attacks against machine learning at test time. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Part III, volume 8190 of LNCS, pages 387–402. Springer Berlin Heidelberg, 2013.
[3] B. Biggio and F. Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84:317–331, 2018.