The DeepFashion2 challenge is based on DeepFashion and DeepFashion2, benchmark datasets proposed to study a wide spectrum of computer vision applications for fashion, including online shopping, personalized recommendation, and virtual try-on. Current techniques are still far from ready for real applications: for instance, the accuracy and efficiency of retrieving clothes from large collections of commercial images given a user-taken photo leave considerable room for improvement. The topics of this challenge are therefore being studied extensively by research groups in both academia and industry. Much of the difficulty of fashion image understanding stems from the gap between existing benchmarks and practical scenarios. For example, the previously largest fashion dataset, DeepFashion, has several drawbacks: a single clothing item per image, a sparse landmark and pose definition (every clothing category shares the same set of 4-8 keypoints), and no per-pixel mask annotation.
To address these drawbacks, we present DeepFashion2, a large-scale benchmark with comprehensive tasks and annotations for fashion image understanding. It is a versatile benchmark covering four tasks: clothes detection, landmark estimation, segmentation, and retrieval. It contains 801K clothing items, each with rich annotations such as style, scale, viewpoint, occlusion, bounding box, dense landmarks, and mask, as well as 873K Commercial-Consumer clothes pairs, making it the most comprehensive benchmark of its kind to date (a sketch of reading such per-item annotations is given after the list below). In this workshop, we host two of the four challenges defined on the DeepFashion2 dataset:
1. The top three winning teams in each challenge are shown below.
2. The top winning team of each challenge will be awarded 1,000 dollars. The top three winning teams in each challenge will receive a certificate.
3. The top three winning teams of each challenge will be invited to present their solutions at our CVFAD 2019 workshop for 10 minutes, 5 minutes, and 5 minutes, respectively.
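To make the per-item annotation structure described above concrete, the sketch below shows how one might iterate over the clothing items annotated in a single image. The file layout and field names (an `annos/*.json` file with `item1`, `item2`, ... entries holding `category_name`, `style`, `scale`, `viewpoint`, `occlusion`, `bounding_box`, `landmarks`, and `segmentation`) are assumptions made for illustration, not a specification taken from this text.

```python
import json

# Hypothetical path to a per-image annotation file (assumed layout).
ANNO_PATH = "train/annos/000001.json"

def load_items(path):
    """Collect every clothing item annotated in one image."""
    with open(path) as f:
        anno = json.load(f)

    items = []
    # Items are assumed to be stored under keys like "item1", "item2", ...
    for key, value in anno.items():
        if not key.startswith("item"):
            continue
        items.append({
            "category": value.get("category_name"),
            "style": value.get("style"),          # used to form Commercial-Consumer pairs
            "scale": value.get("scale"),
            "viewpoint": value.get("viewpoint"),
            "occlusion": value.get("occlusion"),
            "bbox": value.get("bounding_box"),    # assumed [x1, y1, x2, y2]
            "landmarks": value.get("landmarks"),  # assumed flat [x, y, visibility, ...] list
            "mask": value.get("segmentation"),    # assumed polygon(s) for the per-pixel mask
        })
    return items

if __name__ == "__main__":
    for item in load_items(ANNO_PATH):
        print(item["category"], item["bbox"])
```

A loader along these lines could feed the detection, landmark estimation, and segmentation tasks from the same per-item records, while the `style` field, together with pair identifiers, could be used to build the Commercial-Consumer pairs for retrieval.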