IAPR Technical Committee 2 (TC2)
Structural and Syntactical Pattern Recognition
This month's research spotlight features Prof. Jun Liu, Professor (Chair in Digital Health) at the School of Computing and Communications, Lancaster University.
Introduce yourself.
My name is Jun Liu. I am a Professor (Chair in Digital Health) at the School of Computing and Communications, Lancaster University, UK. I am also the director of the Vision and Language Group. My research spans artificial intelligence, computer vision, machine learning, and digital health, with a particular interest in developing reliable and impactful AI methods for understanding visual, multimodal, and human-centred data.
I am also fortunate to serve the research community in various capacities. On the editorial side, I currently serve as Associate Editor-in-Chief for Pattern Recognition, alongside editorial roles for several other leading journals. In terms of conference service, I have taken on broader organising roles such as Program Chair of BMVC 2025 and General Chair of BMVC 2026. I am also an executive committee member of the British Machine Vision Association.
How did you start your research in pattern recognition and human behaviour understanding?
My interest in pattern recognition and human behaviour analysis began quite naturally during my early research years. I was always fascinated by the question of how machines can understand people, not just as static objects in an image, but as dynamic agents with motion, intention, and interaction. Human behaviour felt especially meaningful to me because it is closely connected to how we live, communicate, and collaborate in the real world. It also brings together many aspects of pattern recognition, such as spatial understanding, temporal modelling, and the interpretation of complex visual cues.
As I worked more deeply in this area, I came to realise that progress in human behaviour understanding depends not only on better models, but also on stronger benchmarks. At that time, one of the major limitations in the field was the lack of large-scale, diverse datasets for 3D human activity analysis. This motivated me to develop the NTU RGB+D, NTU RGB+D 120, and UAV-Human datasets, with the goal of providing the community with more realistic and comprehensive benchmarks for studying human actions, interactions, and motion patterns. Together, these datasets established a much larger and more diverse benchmark suite for skeleton-based human activity analysis, and they went hand in hand with my broader research on action recognition, early action prediction, pose-based analysis, and human-centred visual understanding. Over time, this body of work became widely adopted across both academia and industry, including major universities (such as MIT, Stanford, and Oxford) and leading technology companies (such as Google, Microsoft, and Meta).
For me, this was especially meaningful because it showed how research in human behaviour analysis can shape the direction of a field and influence both academic and industrial innovation. More broadly, my work in this area has continued to evolve around a central question: how can we enable AI systems to better perceive, interpret, and reason about human behaviour in complex real-world settings? This question has motivated much of my research over the years, from action recognition and motion analysis to multimodal understanding and human-centred AI.
Could you talk a little bit more about your current research in human behaviour?
A major direction of my current research is to explore how AI systems can move from understanding the world to engaging with it more effectively. Much of my earlier work focused on helping machines perceive and interpret human behaviour, including actions, motion, and interactions. More recently, at the Vision and Learning Group, we have become increasingly interested in vision-language-action (VLA) models, because they offer a natural way to connect perception, language, and decision-making within a unified framework. In my view, this is an important step toward building AI systems that are not only perceptive, but also responsive and useful in real-world settings.
What I find particularly interesting is that VLA models build on many of the same questions that motivated my earlier work. To work well with people, an AI system needs to understand what a person is doing, what they are trying to achieve, and how the surrounding environment shapes possible actions. This is where human behaviour analysis becomes highly relevant. It provides an essential foundation for developing systems that can interpret human intent, follow instructions, and interact with the world in a more informed and adaptive way.
I am especially interested in how these models can support collaboration between humans and AI. Rather than treating AI simply as an autonomous agent, I see strong potential in using VLA models to complement human abilities, assist with complex tasks, and provide context-aware support in domains such as digital health, robotics, and assistive technologies. More broadly, I think this direction opens up new opportunities for creating AI systems that are more interactive, more aligned with human needs, and ultimately more helpful in practice.
Any message to the readers?
AI is developing very quickly, and we are now seeing it move from research labs into many parts of everyday life. As this happens, I believe it is increasingly important to think not only about what AI systems can do, but also about how they can work with people in a useful, reliable, and responsible way.
My message to researchers and practitioners is that we should keep human needs at the centre of AI development. Whether we work on perception, language, action, or multimodal intelligence, the goal should not simply be to build more capable models, but to create systems that can genuinely support people in real-world settings. This requires both technical innovation and careful thinking about impact, trust, and usability.
I would also encourage early-career researchers to stay curious and open to interdisciplinary problems. There is still a great deal to explore, and I believe the next generation of researchers will play an important role in shaping how AI can positively contribute to science and society.