SHREC 2022 Track:

Sketch-Based 3D Shape Retrieval in the Wild

News

  • 12/07/2022 Our SHREC 2022 track paper has been accepted to Computers & Graphics.

  • 09/03/2022 We have announced the final results.

  • 28/02/2022 We have released the test sets at [Google Drive].

  • 16/02/2022 We have released the evaluation code at [Google Drive] as well as the test protocol (see below).

  • 05/02/2022 We have released the training set for the second task at [Google Drive].

  • 29/01/2022 We have released the training set for the first task at [Google Drive].

  • 15/01/2022 We have released a few sample sketches (100 per category) and 3D models (10 per category) at [Google Drive] [Baidu Netdisk].

Introduction

Sketch-based 3D shape retrieval (SBSR) [1-3] has drawn a significant amount of attention, owing to the succinctness of free-hand sketches and the increasing demands from real applications. It is an intuitive yet challenging task due to the large discrepancy between the 2D and 3D modalities.


To foster research on this important problem, several tracks focusing on related tasks have been held in past SHREC challenges, such as [4-7]. However, the datasets they adopted are not very realistic and thus cannot faithfully simulate real application scenarios. To mimic the real-world scenario, the dataset is expected to meet the following requirements. First, there should exist a large domain gap between the two modalities, i.e., sketches and 3D shapes. However, current datasets unintentionally narrow this gap by using projection-based/multi-view representations for 3D shapes (i.e., a 3D shape is manually rendered into a set of 2D images). In this way, the large 2D-3D domain discrepancy is unnecessarily reduced to a 2D-2D one. Second, the data from both modalities should themselves be realistic, mimicking the real-world scenario. More specifically, we need a full variety of sketches per category, as real users possess various drawing skills. As for 3D shapes, we need models captured in real-world settings rather than artificially created ones. However, human sketches in existing datasets tend to be semi-photorealistic drawings made by experts, and the number of sketches per category is quite limited; meanwhile, most current 3D datasets used in SBSR are composed of CAD models, which lack certain details compared to models scanned from real objects.


To circumvent the above limitations, this track proposes a more realistic and challenging setting for SBSR. On the one hand, we adopt highly abstract 2D sketches drawn by amateurs and, at the same time, bypass projection-based representations of 3D shapes by directly representing them as 3D point clouds. On the other hand, we adopt a full variety of free-hand sketches with numerous samples per category, as well as a collection of realistic point clouds captured from real indoor objects. Therefore, we name this track ‘sketch-based 3D shape retrieval in the wild’ (SBSRW). As stated above, the term ‘in the wild’ is reflected in two perspectives: 1) the domain gap between the two modalities is realistic, as we adopt sketches of high abstraction levels and 3D point cloud data; 2) the data themselves mimic the real-world setting, as we adopt a full variety of sketches and 3D point clouds captured from real objects.

Benchmark Overview

To fulfill our goal of SBSR in the wild, our benchmark takes advantage of four existing 2D/3D datasets, including a large-scale 2D sketch collection, i.e., QuickDraw [8], and three 3D point cloud datasets, i.e., ModelNet40 [9], ShapeNet [10], and ScanObjectNN [11].

Tasks

We propose two tasks to evaluate the performance of different SBSR algorithms: sketch-based 3D CAD model (point cloud data) retrieval and sketch-based realistic scanned model (point cloud data) retrieval.


For the first task, we select around 46,000 3D CAD models from 44 classes in ModelNet40/ShapeNet and, for each corresponding category, an average of 2,700 sketches from QuickDraw. We randomly select 80% of the sketches from each class for training, and the remaining 20% per class are used for testing/query. All the 3D point clouds together serve as the target/gallery set for evaluating retrieval performance. Participants are asked to submit their results on the test set.


For the second task, we select around 1,700 realistic 3D models from 10 classes in ScanObjectNN and an average of 2,500 sketches per class from QuickDraw. As in the first task, we randomly select 80% of the sketches from each class for training, and the remaining 20% per class are used for testing/query. All the 3D point clouds together serve as the target/gallery set for evaluating retrieval performance. Participants are asked to submit their results on the test set.
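To make the split above concrete, here is a minimal sketch of a per-class 80%/20% partition of the sketches, as used in both tasks. The function and its inputs (sketch_ids, labels) are hypothetical placeholders; the officially released training and test sets already follow this split, so the snippet is purely illustrative.

    import random
    from collections import defaultdict

    def split_sketches(sketch_ids, labels, train_ratio=0.8, seed=0):
        # Group sketch ids by class, then split each class into 80% train / 20% test.
        by_class = defaultdict(list)
        for sid, lab in zip(sketch_ids, labels):
            by_class[lab].append(sid)

        rng = random.Random(seed)
        train, test = [], []
        for ids in by_class.values():
            rng.shuffle(ids)
            cut = int(round(train_ratio * len(ids)))
            train += ids[:cut]   # 80% of each class for training
            test += ids[cut:]    # remaining 20% used as test/query sketches
        return train, test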

Evaluation Method

For a comprehensive evaluation of different algorithms, we employ the following performance metrics, widely adopted in SBSR: nearest neighbor (NN), first tier (FT), second tier (ST), E-measure (E), discounted cumulated gain (DCG), mean average precision (mAP), and the precision-recall (PR) curve. We will provide the source code to compute all the aforementioned metrics.
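To make the metrics concrete, the sketch below computes NN, FT, ST, and mAP from a query-by-gallery distance matrix. The variable names (dist, query_labels, gallery_labels) are assumptions for illustration, and E-measure, DCG, and the PR curve are omitted for brevity; the official evaluation code released by the organizers remains the reference implementation.

    import numpy as np

    def retrieval_metrics(dist, query_labels, gallery_labels):
        # dist:           (n_queries, n_gallery) distances, smaller = more similar
        # query_labels:   (n_queries,)  class label of each query sketch
        # gallery_labels: (n_gallery,)  class label of each gallery point cloud
        order = np.argsort(dist, axis=1)                      # ranked gallery indices per query
        rel = gallery_labels[order] == query_labels[:, None]  # relevance of each ranked item

        nn = rel[:, 0].mean()                                 # nearest neighbor: precision at rank 1

        n_rel = rel.sum(axis=1)                               # relevant models per query (C)
        ft = np.mean([r[:c].sum() / c for r, c in zip(rel, n_rel)])      # first tier: recall in top C
        st = np.mean([r[:2 * c].sum() / c for r, c in zip(rel, n_rel)])  # second tier: recall in top 2C

        ranks = np.arange(1, rel.shape[1] + 1)
        precision_at_k = np.cumsum(rel, axis=1) / ranks       # precision at every rank
        ap = (precision_at_k * rel).sum(axis=1) / n_rel       # average precision per query
        return {"NN": nn, "FT": ft, "ST": st, "mAP": ap.mean()}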

Procedure

The following list is a step-by-step description of the activities:

  • The participants register for the track by sending an email to qinjiebuaa@gmail.com with 'SHREC 2022 - SBSRW Track Registration' as the title, indicating which task(s) they are interested in.

  • The organizers release the dataset via their website.

  • The participants submit the distance matrices for the test sets, with one-page descriptions of their methods.

  • Evaluation is automatically performed based on the submitted matrices, by computing all the performance metrics via the official source code.

  • The organizers announce the results and the final rank list of all the participants.

  • The track results are combined into a joint paper, which is subject to a two-stage peer review process. Accepted papers will be published in Computers & Graphics.

  • The description of the track and the results will be presented at the Eurographics 2022 Symposium on 3D Object Retrieval (1-2 September 2022).

Test Protocol

  • The organizers release the corresponding test set via their website several days before the submission deadline of each task.

  • For the first task, considering computational efficiency, we randomly select 20-30% of the total 3D models as the target/gallery set. For the second task, all the 3D models are used as the gallery set for retrieval.

  • The participants submit the distance matrices for the test sets before the submission deadline of each task, with one-page descriptions of their methods. The format of the distance matrix can be found here; an illustrative sketch of building such a matrix is given after this list.

  • The participants can submit up to 3 distance matrices, resulting from different runs. Each run can be a different algorithm or a different parameter setting.

  • Evaluation is automatically performed based on the submitted matrices, by computing all the performance metrics via the evaluation code.
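As a rough illustration of the submission pipeline, the sketch below builds a query-by-gallery distance matrix from sketch and point-cloud embeddings and writes it to disk. The embeddings (sketch_feats, shape_feats) are placeholders for a participant's own features, and the plain-text layout assumed here is for illustration only; the authoritative format is the one linked above.

    import numpy as np

    def build_distance_matrix(sketch_feats, shape_feats):
        # sketch_feats: (n_queries, d) embeddings of the test/query sketches
        # shape_feats:  (n_gallery, d) embeddings of the gallery point clouds
        # Cosine distance is used here; any measure where smaller means "more similar" works.
        s = sketch_feats / np.linalg.norm(sketch_feats, axis=1, keepdims=True)
        g = shape_feats / np.linalg.norm(shape_feats, axis=1, keepdims=True)
        return 1.0 - s @ g.T                                  # shape: (n_queries, n_gallery)

    # dist = build_distance_matrix(sketch_feats, shape_feats)
    # np.savetxt("run1_distance_matrix.txt", dist, fmt="%.6f")  # assumed plain-text layout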

Schedule

  • January 1: Call for participation.

  • January 15: Release a few sample sketches and 3D models.

  • January 22: Registration deadline.

  • January 29: Release the training set for the first task.

  • February 5: Release the training set for the second task.

  • February 28: Submission deadline for the first task. Extended to March 4.

  • March 4: Submission deadline for the second task.

  • March 8: Release the final results for both tasks; jointly write the track report.

  • March 15: Submission deadline for the joint paper for C&G review.

Results


References

[1] B. Li, Y. Lu, A. Godil, et al. A Comparison of Methods for Sketch-Based 3D Shape Retrieval. Computer Vision and Image Understanding, 119:57–80, 2014.

[2] J. Chen, J. Qin, L. Liu, et al. Deep Sketch-Shape Hashing with Segmented 3D Stochastic Viewing. IEEE Conference on Computer Vision and Pattern Recognition, 2019.

[3] J. Chen and Y. Fang. Deep Cross-modality Adaptation via Semantics Preserving Adversarial Learning for Sketch-Based 3D Shape Retrieval. European Conference on Computer Vision, 2018.

[4] B. Li, Y. Lu, A. Godil, et al. SHREC’13 Track: Large Scale Sketch-Based 3D Shape Retrieval. Eurographics Workshop on 3D Object Retrieval, 2013.

[5] B. Li, Y. Lu, C. Li, et al. SHREC’14 Track: Extended Large Scale Sketch-Based 3D Shape Retrieval. Eurographics Workshop on 3D Object Retrieval, 2014.

[6] B. Li, Y. Lu, F. Duan, et al. SHREC'16 Track: 3D Sketch-Based 3D Shape Retrieval. Eurographics Workshop on 3D Object Retrieval, 2016.

[7] J. Yuan, B. Li, Y. Lu, et al. SHREC'18 Track: 2D Scene Sketch-Based 3D Scene Retrieval. Eurographics Workshop on 3D Object Retrieval, 2018.

[8] D. Ha and D. Eck. A Neural Representation of Sketch Drawings. International Conference on Learning Representations, 2018.

[9] Z. Wu, S. Song, A. Khosla, et al. 3D ShapeNets: A Deep Representation for Volumetric Shapes. IEEE Conference on Computer Vision and Pattern Recognition, 2015.

[10] A. X. Chang, T. Funkhouser, L. Guibas, et al. ShapeNet: An Information-Rich 3D Model Repository. arXiv preprint arXiv:1512.03012, 2015.

[11] M. A. Uy, Q.-H. Pham, B.-S. Hua, et al. Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data. IEEE International Conference on Computer Vision, 2019.

[12] S. Dey, P. Riba, A. Dutta, et al. Doodle To Search: Practical Zero-Shot Sketch-Based Image Retrieval. IEEE Conference on Computer Vision and Pattern Recognition, 2019.

[13] P. Sangkloy, N. Burnell, C. Ham, et al. The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies. ACM SIGGRAPH, 2016.

[14] M. Eitz, J. Hays and M. Alexa. How do Humans Sketch Objects? ACM SIGGRAPH, 2012.

Organizers

Jie Qin

NUAA, China

Jiaxin Chen

BUAA, China

Boulbaba Ben Amor

IMT Nord Europe & IIAI

Yi Fang

NYUAD & NYU

Contact: qinjiebuaa@gmail.com