In recent years, there has been a growing interest in the study of 3D shape recognition and analysis. This interest has been facilitated by the availability of accessible 3D digitization devices (e.g., stereoscopic cameras on smartphones) combined with powerful reconstruction algorithms. Similarly to the development of machine learning on images, we are witnessing the release of increasingly large and complex 3D datasets, which can be adapted to various tasks.
To process this vast amount of available data, it is essential to develop retrieval algorithms that are efficient in handling high-resolution 3D scans. At the same time, these methods need to be flexible, allowing for the recognition of different parts of a 3D model based on multi-modal queries (i.e., text prompts, images, or other 3D models). However, most contributions to 3D object retrieval overlook a key aspect of 3D shapes: geometric textures (also known as relief patterns).
The objective of this track is to promote research on relief pattern analysis methods. We present a main challenge focused on pattern retrieval and an optional challenge on pattern segmentation.
Task
Pattern retrieval involves identifying all 3D models that share at least one relief pattern with those present on the surface of a query model. Once the relevant models have been identified, the shared patterns must be localized on their surfaces. If the query contains multiple relief patterns, all models that exhibit at least one of these patterns must be retrieved. This process is repeated for all models in the query dataset.
Below, we provide a description of the datasets involved in the challenge.
Datasets
Query set: The query set consists of 54 triangulated meshes, generated by applying different combinations of relief patterns to six base surfaces. 40 of these meshes contain two relief patterns on their surface, while the remaining 14 have a single pattern applied across the entire surface. This latter set of 14 meshes is referred to as the simplified dataset. Each query mesh is named after the textures it contains (e.g., 65_21.ply for textures 65 and 21). Moreover, all models in the query set are sampled at 100,000 vertices.
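Since the file names encode the applied textures, the pattern IDs of a query can be recovered directly from its file name. Below is a minimal Python sketch (the helper is purely illustrative and not part of the provided material):

from pathlib import Path

def query_patterns(path):
    # Return the pattern IDs encoded in a query file name, e.g. "65_21.ply" -> [65, 21].
    return [int(token) for token in Path(path).stem.split("_")]

print(query_patterns("65_21.ply"))  # [65, 21]  (two-pattern query)
print(query_patterns("35.ply"))     # [35]      (simplified dataset, single pattern)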
The six base surfaces are constructed by sculpting, twisting, and folding a unit square plane in Blender. All base surfaces are topologically equivalent, but each displays exaggerated geometric features such as cavities, folds, and other kinds of occlusions. The reason behind this choice is that 3D pattern analysis of complex shapes is still an open challenge; thus, every proposed solution represents an important contribution to the broader field of 3D shape analysis.
There are a total of 14 different relief patterns, selected from a collection of textures curated by Joao Paulo. These textures were chosen to include both man-made materials, such as stone walls, padded fabrics, and armor scales, and organic textures, such as snake skin and pine bark.
Examples of the chosen textures are shown below:
Fig. 1: Examples of chosen textures.
Participants are required to use all 54 models from this dataset as queries to retrieve all models within a separate retrieval dataset that contain at least one pattern present on the surface of the query model.
If the task proves too challenging, participants will have the option to submit results using only the models from the simplified dataset as queries.
Examples of the generated models from the query set are shown below:
Fig. 2: Examples of models from the query set containing two relief patterns.
Fig. 3: Examples of models from the simplified set.
Retrieval set: The retrieval set consists of 300 triangulated meshes. We selected 15 base models from free datasets available on platforms such as Polyhaven and Sketchfab. The selected models include both common objects, such as pillows, chairs, and vases, and objects with non-trivial topologies, such as the torus, the Stanford dragon, and the Utah teapot. The diversity of the selected surfaces allows us to evaluate the capability of the participating methods to analyze relief patterns regardless of the underlying geometry. To generate the final dataset, we applied various combinations of pairs of relief patterns to each base model. Each final mesh was then simplified to contain between 100,000 and 200,000 vertices.
Training set and ground truth: This track also includes a training set consisting of 700 triangulated meshes. The training set differs from the retrieval set in that some pattern classes are not shared between the two. The motivation behind this choice is that, in real-world applications, input objects may contain patterns of unknown categories; thus, it is crucial to develop and evaluate solutions capable of generalizing to unseen data.
Along with the models, the ground truth for the pattern retrieval task is also provided. For each training model, a text file provides per-face annotations, where the i-th row corresponds to the i-th face in the mesh. The row value indicates the pattern label (65, 21, etc.) or 0 if no relevant relief pattern is present.
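As an illustration of how the annotations line up with the mesh, the sketch below loads a training model and its ground-truth file with trimesh and NumPy (the file names and the use of trimesh are assumptions made for the example, not requirements of the track):

import numpy as np
import trimesh

# Hypothetical file names, used only for illustration.
mesh = trimesh.load("training/65_21.ply", process=False)   # process=False keeps the face order
labels = np.loadtxt("training/65_21.txt", dtype=int)        # one label per face, 0 = no pattern

assert len(labels) == len(mesh.faces), "the i-th row must correspond to the i-th face"

faces_with_65 = np.flatnonzero(labels == 65)   # indices of faces carrying pattern 65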
The base models, along with some examples of the final generated models for the training and retrieval datasets, are shown below:
Fig. 4: Base models used to generate the retrieval/training set.
Fig. 5: Examples of models from the retrieval/training set.
Downloads
You can find a sample of models generated for this track at the link below. The link also includes an example of submitted results for the pattern retrieval task (Example_Submitted_Result_Pattern_Retrieval.txt) and the pattern segmentation task (Example_Submitted_Result_Segmentation.txt). All models are in PLY format and do not contain additional information such as diffuse, normal, or roughness maps.
Optional task
In addition to the pattern retrieval task, participants are encouraged to propose methods for relief pattern segmentation. Pattern segmentation requires analyzing the characteristics of individual surfaces that contain more than one relief pattern. The goal is to segment each model into patches according to the relief patterns present on its surface.
To address this challenge, participants are required to segment all 40 models from the query dataset that contain two relief patterns. The surfaces from the simplified dataset, which have only a single geometric texture applied, are naturally excluded from this challenge.
People interested in participating in this track must register by sending an email to Gabriele Paolini (email: gabriele.paolini1@unifi.it) with the subject "SHREC'25 track: Retrieval and Segmentation of Multiple Relief Patterns".
Then, every participant is asked to:
Download the datasets (PLY files).
Run their methods. Each model in the query set must, in turn, be used as a query against all models in the retrieval set; at a minimum, the models from the simplified dataset must be used as queries. Then, for the optional challenge, all models in the query set must be segmented according to the categories of patterns present on their surfaces.
Provide by April 7, 2025:
Pattern retrieval challenge: participants should submit a plain ASCII file describing if and where the patterns on a query appear in the models from the retrieval set. Up to 3 files may be submitted, resulting from different algorithms or parameter settings. Each line from the file is structured as follows:
q t labels
where q is the name of the query model (which encodes the patterns on its surface), t is the index of the model in the retrieval set, and labels is a list of white-space-separated labels, one for each face of model t. The label at index f is an integer indicating which pattern from the query, if any, is present on the f-th face of model t; the label values are integers arbitrarily assigned by participants. We suggest naming the resulting file NameParticipant_Ret_runX.txt, where X can be 1, 2, or 3. An example of the submitted file structure is shown below:
02_08 1 [0 0 0 0 0 0 1 1 1 0 0 2 2 2 2 1]
02_08 3 [0 0 0 0 2 2 2 1 1 1 1 0 0 0]
35 9 [0 0 0 0 0 0 0 3 3 3 3]
...
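As a reference, a minimal sketch of how such a file could be written is given below; the query names, model indices, and per-face labels are dummy placeholders, and the exact formatting should follow the example file provided in the downloadable archive:

def write_retrieval_run(results, out_path="NameParticipant_Ret_run1.txt"):
    # `results` is assumed to be an iterable of tuples
    # (query_name, retrieval_model_index, per_face_labels), where per_face_labels
    # holds one integer per face of the retrieval model.
    with open(out_path, "w") as f:
        for query_name, model_index, face_labels in results:
            labels_str = " ".join(str(int(label)) for label in face_labels)
            f.write(f"{query_name} {model_index} {labels_str}\n")

# Dummy usage for a single query/model pair.
write_retrieval_run([("02_08", 1, [0, 0, 1, 1, 2, 2])])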
We will use four measures to assess the quality of the submitted per-face labelings: Hamming Distance, Consistency Error, Rand Index, and Weighted Dice Coefficient.
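For reference only (this is not the organizers' evaluation script), the Rand Index between two per-face labelings can be computed with a contingency-table formulation, for instance:

import numpy as np

def rand_index(labels_a, labels_b):
    # Fraction of face pairs on which the two labelings agree
    # (grouped together in both, or separated in both); invariant to label renaming.
    a = np.asarray(labels_a)
    b = np.asarray(labels_b)
    n = a.size
    _, a_idx = np.unique(a, return_inverse=True)
    _, b_idx = np.unique(b, return_inverse=True)
    cont = np.zeros((a_idx.max() + 1, b_idx.max() + 1), dtype=np.int64)
    np.add.at(cont, (a_idx, b_idx), 1)           # contingency table between the two labelings

    def comb2(x):
        return x * (x - 1) // 2

    same_both = comb2(cont).sum()                # pairs grouped together in both labelings
    same_a = comb2(cont.sum(axis=1)).sum()       # pairs grouped together in labeling a
    same_b = comb2(cont.sum(axis=0)).sum()       # pairs grouped together in labeling b
    total = comb2(n)                             # all face pairs
    return (total + 2 * same_both - same_a - same_b) / total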
Segmentation challenge: participants should submit a plain ASCII file containing the segmentation masks for the query models. Up to 3 files may be submitted, resulting from different algorithms or parameter settings. We expect 40 lines, one for each query model that contains two relief patterns. Each line must contain Fq white-space-separated labels, where Fq is the number of faces in the q-th query model; the label at index f indicates the segmented region to which the f-th face belongs. We suggest naming such a file NameParticipant_Seg_runX.txt, where X can be 1, 2, or 3.
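A minimal sketch of writing such a run file is shown below (the per-face segmentations themselves are placeholders to be produced by the participant's method):

def write_segmentation_run(segmentations, out_path="NameParticipant_Seg_run1.txt"):
    # `segmentations` is assumed to be a list of 40 label sequences, ordered as the
    # two-pattern query models, each holding one integer region label per face.
    with open(out_path, "w") as f:
        for face_labels in segmentations:
            f.write(" ".join(str(int(label)) for label in face_labels) + "\n")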
We will use the same 4 evaluation measures as in the previous task.
Examples of submitted results for both tasks can be found in the aforementioned downloadable archive.
In addition, participants must report the following information:
System specification: CPU (model, clock speed in GHz, number of CPUs, RAM per CPU in GB). If participants use a GPU, we require the model, clock speed in MHz, memory in GB, and number of GPUs.
Processing time in seconds: For both tasks, participants should distinguish between offline processing (e.g., dictionary computation in BoF approaches or neural network training) and online processing (e.g., the time required to compute the segmentation masks). Please provide the average inference times.
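One possible way to keep the two timings separate is sketched below (assuming a Python pipeline; train_method and segment_query are placeholders for the participant's own code):

import time

def timed(fn, *args, **kwargs):
    # Return the result of fn(*args, **kwargs) together with its wall-clock time in seconds.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage:
# _, offline_seconds = timed(train_method, training_set)   # offline processing
# _, online_seconds  = timed(segment_query, query_mesh)    # online processing (average over queries)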
Further information
The evaluations will be done automatically.
The track results will be combined into a joint paper, to be published in Computers & Graphics.
The description of the tracks and their results will be presented at the Eurographics 2025 Symposium on 3D Object Retrieval, 4-5 September 2025.
The following list is a step-by-step description of the activities:
March 7 to 16, 2025: A reduced-size dataset will be released on this page for participants.
March 17 to April 7, 2025: The complete datasets are available and participants run their methods. Results must be submitted by the end of this period.
April 7 to 14, 2025: Participants provide a description of their proposed methods.
April 14 to 21, 2025: The track organizers prepare a draft of the paper with the submitted methods and their analysis.
April 21 to 28, 2025: The organizers circulate a draft of the track paper for feedback.
April 30, 2025: The track paper is submitted for review. Upon acceptance, the paper will be published in the international journal Computers & Graphics.
Gabriele Paolini, Media Integration and Communication Center, University of Florence, Florence, Italy.
Claudio Tortorici, Technology Innovation Institute, Abu Dhabi, United Arab Emirates.
Stefano Berretti, Media Integration and Communication Center, University of Florence, Florence, Italy.
For additional information, please do not hesitate to contact Gabriele Paolini (email: gabriele.paolini1@unifi.it).