Paper Track

Accepted Papers

  • MixNMatch: Multifactor Disentanglement and Encoding for Conditional Image Generation. Yuheng Li, Krishna Kumar Singh, Utkarsh Ojha and Yong Jae Lee. [pdf]
  • Deep Metric Learning with Multi-Objective Functions. Juyoung Lee and Myung-Cheol Roh. [pdf]
  • An Effective Pipeline for a Real-world Clothes Retrieval System. Byungsoo Ko, Yang-Ho Ji, HeeJae Jun, Jongtack Kim, Youngjoon Kim, Insik Kim, Hyong-Keun Kook, Jingeun Lee, Sanghyuk Park and Sangwon Lee. [pdf]
  • Hybrid Style Siamese Network: Incorporating style loss in complimentary apparels retrieval. Mayukh Bhattacharyya and Sayan Nag. [pdf]
  • 3D Reconstruction of Clothes using a Human Body Model and its Application to Image-based Virtual Try-On. Matiur Rahman Minar, Thai Thanh Tuan, Heejune Ahn, Paul Rosin and Yu-Kun Lai. [pdf]
  • Cross-Modal Fashion Product Search with Transformer-Based Embeddings. Muhammet Bastan, Arnau Ramisa and Mehmet Tek. [pdf]
  • From Paris to Berlin: Discovering Fashion Style Influences Around the World. Ziad Al-Halah and Kristen Grauman. [pdf]
  • Domain Adaptive Hard Example Mining for Fashion Instance Retrieval. Aixi Zhang, Yongliang Wang, Jiang Yang, Xiaobo Li and Si Liu. [pdf]
  • CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On. Matiur Rahman Minar, Thai Thanh Tuan, Heejune Ahn, Paul Rosin and Yu-Kun Lai. [pdf]
  • Studying Empirical Color Harmony in Design. Samuel P Goree and David Crandall. [pdf]
  • DeepMark++: CenterNet-based Clothing Detection. Alexey Sidnev, Alexander Krapivin, Alexey Trushkov, Ekaterina Krasikova, Maxim Kazakov. [pdf]
  • Main Product Detection with Graph Networks in Fashion. Vacit Oguz Yazici, Longlong Yu and Joost van de Weijer. [pdf]
  • ViBE: Dressing for Diverse Body Shapes. Wei-Lin Hsiao and Kristen Grauman. [pdf]


Topics Covered

Topics of the papers include but are not limited to:

  • Fashion or artistic image generation: generating high-quality fashion designs or artistic images via automated approaches.
  • Clothing landmark estimation: predicting landmarks for each detected clothing item in fashion images.
  • Style recommendation: suggesting fashion articles or outfits that complement the style of a particular article or match the customer’s preferences.
  • Cross-domain visual search: robust visual search between different domains, such as street-style photos, catalogue images and art/design sketches.
  • Visual size and fit advice: automatically providing generic or personalized visual size and fit advice to assist customers in their purchase journey.
  • Virtual try-on/wardrobe: projecting fashion articles onto humans/avatars to visualize the style and fit of an outfit, including generative models for 3D.
  • Body shape prediction: inferring a customer’s body shape or measurements from single or multiple images, as well as RGB-D data.
  • Automatic article tagging: automatically tagging articles with visual features, enabling fast inventory logging.
  • Trend analysis and forecast: automated visual style discovery and trend analysis and forecast from social media data with weak supervision.
  • Personal shopping assistant: multi-modal interaction between the customer and an intelligent agent which aims to assist the shopper’s experience, such as style discovery and accurate product search.
  • Efficient fashion search: retrieves images of fashion items that match the online shopper’s query (the query itself can take many forms).
  • Fashion analysis from videos: efficient fashion outfits parsing and retrieval in videos.
  • Novel methods for visual content generation: new theoretical insights and new techniques for generating visual content, including images, sketches, videos, 3D content, etc.
  • New evaluation metrics: objective metrics used to compare image generation algorithms on the quality and aesthetic value of generated content.
  • Design with humans in the loop: generative design algorithms that explore systems for augmenting the human creative process, with new approaches to human–algorithm interaction.
  • Novel applications: novel applications that can potentially bring value to the creative domain.

Submission and Presentation Guidelines

We solicit short papers on developing and applying computer vision techniques that are valuable in the creative domains, with an emphasis on fashion, art and design. Papers may be two to four pages long, including references. We encourage submissions of previously published work as well as work in progress on topics relevant to the workshop. Accepted short papers will be linked on the workshop webpage and presented at the poster session, and one paper will receive a best paper award. Manuscripts should follow the CVPR 2020 template and should be submitted through our CMT portal.

Paper submission Link: https://cmt3.research.microsoft.com/CVFAD2020/

Note to Authors

  • Authors who want to submit work accepted at this workshop to a different journal or conference should check that venue’s double-submission rules thoroughly. The four-page limit is compatible with the policies of the most common computer vision conferences, but authors need to confirm this with the venue they intend to submit to.
  • Please note that our workshop does not have proceedings.
  • The review process is single-blind.
  • Authors can optionally submit supplemental materials for their paper via CMT.
  • For the poster session, the poster dimension requirement is the same as for the main conference.

Important Dates

  • Paper submission deadline: April 20th (11:59PM PST)
  • Notification to authors: May 15th (11:59PM PST)