Grants

We are very pleased to have been awarded the research project "Generative AI challenge: A Tracking-Discriminating Deep Generative Model for Real-Time Anomaly Detection in Videos" from Airbus Endeavr Wales Group Ltd, Airbus Defence and Space, UK, to explore advanced generative AI models for live video streams with complex scenarios. The project runs from June 2023 to May 2024 (inclusive), with a total award of £156,600.

Project overview: 

This project investigates video generation and editing using advanced generative AI models, covering both theoretical foundations and algorithmic development.


Our work proceeds along two distinct lines: Multi-Task Assembly Editing (MTAE) and Generation-based Editing (GenE). MTAE combines several visual sub-tasks, including detection, tracking, segmentation, generation, and inpainting, to perform targeted video editing. GenE, in contrast, uses generative models to apply the desired edits while synthesising a new counterpart video. Each approach has its own advantages and drawbacks.

Multi-Task Assembly Editing (MTAE) 

MTAE excels in local editing scenarios, where specific objects or backgrounds are detected, tracked, segmented, generated, and replaced directly within a given input video. This approach yields results that are intuitive and photorealistic. 
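
To make the assembly idea concrete, below is a minimal Python sketch of how such a per-frame pipeline could be wired together. The component interfaces (detector, tracker, segmenter, generator, inpainter) and all class and function names here are illustrative placeholders under assumed interfaces, not the project's actual implementation.

```python
# Hypothetical sketch of an MTAE-style local-editing pipeline.
# Each component is a placeholder callable standing in for a real model.
from typing import Callable, Iterable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixel coordinates


class MTAEPipeline:
    """Assembles detection, tracking, segmentation, generation and
    inpainting into a single local-editing pass over a video."""

    def __init__(self,
                 detector: Callable[[np.ndarray], List[Box]],
                 tracker: Callable[[List[Box]], List[Tuple[int, Box]]],
                 segmenter: Callable[[np.ndarray, Box], np.ndarray],
                 generator: Callable[[np.ndarray, np.ndarray], np.ndarray],
                 inpainter: Callable[[np.ndarray, np.ndarray, np.ndarray], np.ndarray]):
        self.detector = detector      # frame -> list of boxes
        self.tracker = tracker        # boxes -> (track_id, box) pairs with stable ids
        self.segmenter = segmenter    # frame, box -> binary mask
        self.generator = generator    # frame, mask -> replacement content
        self.inpainter = inpainter    # frame, mask, content -> edited frame

    def edit_frame(self, frame: np.ndarray, target_ids: List[int]) -> np.ndarray:
        boxes = self.detector(frame)
        tracks = self.tracker(boxes)
        edited = frame.copy()
        for track_id, box in tracks:
            if track_id not in target_ids:
                continue  # only the selected objects are edited
            mask = self.segmenter(frame, box)
            content = self.generator(frame, mask)
            edited = self.inpainter(edited, mask, content)
        return edited

    def edit_video(self, frames: Iterable[np.ndarray],
                   target_ids: List[int]) -> List[np.ndarray]:
        # The same targeted edit is applied frame by frame, with the tracker
        # keeping object identities consistent across the clip.
        return [self.edit_frame(f, target_ids) for f in frames]
```

Because the edit is confined to the masked regions, the rest of each frame is passed through untouched, which is what keeps MTAE results photorealistic for local edits.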

Generation-based Editing (GenE)

GenE extends beyond local editing, enabling a broader range of edits such as global style changes, structural modifications, and layout adjustments.
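
As an illustration, the sketch below shows how a GenE-style edit might be invoked: the source clip and an edit prompt are handed to a generative video model, which synthesises a new counterpart video rather than patching regions of the input. The `VideoGenerator` protocol, the `gene_edit` function, and the `guidance_scale` parameter are assumptions made for illustration, not a specific library API or the project's implementation.

```python
# Hypothetical sketch of a GenE-style global edit via full re-synthesis.
from typing import List, Protocol
import numpy as np


class VideoGenerator(Protocol):
    """Placeholder interface for a prompt-conditioned video generative model."""

    def generate(self, source_frames: List[np.ndarray], prompt: str,
                 guidance_scale: float) -> List[np.ndarray]:
        """Return a new clip that follows `prompt` while staying close
        to the structure of `source_frames`."""
        ...


def gene_edit(model: VideoGenerator,
              frames: List[np.ndarray],
              prompt: str,
              guidance_scale: float = 7.5) -> List[np.ndarray]:
    # Global edits (style, structure, layout) are expressed through the prompt;
    # the model produces an entirely new counterpart video conditioned on the
    # source frames instead of editing them in place.
    return model.generate(frames, prompt, guidance_scale)
```

The trade-off relative to MTAE is that re-synthesising the whole clip makes global changes easy to express, but pixel-accurate preservation of unedited content is harder to guarantee.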