Representation Learning for Pedestrian Re-identification
Saturday, September 8th, 2018 - TBD (Morning)
Organizers: Liang Zheng, Yang Yang, Shengcai Liao
09:00 – 09:40: A general introduction and overview of person re-identification [40 min]
09:40 – 10:20: The seamless cooperation of visual descriptors and similarity metrics [40 min]
10:20 – 10:40: Coffee break [20 min]
10:40 – 11:40: Deep architectures for representation learning and transfer learning [60 min]
11:40 – 12:00: Questions & Discussion [20 min]
The task of person re-identification aims to find a queried person in a large database of pedestrian images, so that the person-of-interest can be located across cameras. This task carries significant research and application value, and in recent years has received rapidly increasing attention from both academia and industry. Traditionally, person re-identification has been characterized by effective combinations of visual descriptors and similarity metrics. At present, the research frontier has advanced to deeply learned invariant feature embeddings that are both discriminative and computationally efficient. Moreover, many new research tasks have been introduced, such as video-based, language-based, and detection-informed re-identification. These rich scientific possibilities have given rise to a flourishing of person re-identification research.
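At its core, the retrieval pipeline described above reduces to ranking a gallery of pedestrian images by their similarity to a query under some learned embedding. The following minimal sketch illustrates this with cosine similarity over hypothetical feature vectors; the toy vectors and function names are illustrative only and do not come from any specific re-identification system.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_gallery(query, gallery):
    """Return gallery indices ordered from most to least similar to the query.

    In a real system, each vector would be a deeply learned embedding of a
    pedestrian image, and the metric itself may also be learned.
    """
    sims = [cosine_similarity(query, g) for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: -sims[i])

# Toy example: three 2-D gallery embeddings; index 1 points closest to the query.
gallery = [[1.0, 0.0], [0.6, 0.8], [0.0, 1.0]]
query = [0.5, 0.9]
ranking = rank_gallery(query, gallery)  # [1, 2, 0]
```

In practice the gallery may contain millions of images, so the ranking step is typically backed by an approximate nearest-neighbor index rather than an exhaustive scan.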
In this context, this tutorial aims to bring together recent research advances and to discuss state-of-the-art methods in representation learning for person re-identification. The tutorial will review the traditional research initiatives in this area, present an overview of the current frontier and transfer learning methods, and finally discuss possible future research directions.
Through this tutorial, the audience will not only gain a more comprehensive knowledge of person re-identification, but also acquire a research vision that may expand their own research capacities.