2D Symmetry


2D.1 and 2D.2 Reflection and Rotation Symmetry Detection

Reflection and rotation symmetry detection are mid-level visual tasks and a fundamental part of human vision. People detect these symmetries effortlessly, yet machines have historically struggled with them. Symmetry aids humans in object segmentation, figure-ground separation, and many other tasks, and it can be similarly useful in computer vision for image understanding, object detection, and more.

Examples of the Reflection and Rotation Datasets.

Examples of state-of-the-art detection on Sym-COCO (image from [Funk and Liu arXiv 2017]).

There are three competitions, each with its own output format and evaluation criteria.

A. Reflection Symmetry Detection [Test Images and Test Toolbox Link]:

  • Your algorithm should output reflection symmetry axes where each axis is defined by a line segment (two points) and the strength of the symmetry.
  • Evaluation is based on the angle difference between the detected and ground-truth axes and on the distance from the center of the detected segment to the ground-truth line segment (same as the previous symmetry competition [Liu et al. 2013]); a minimal sketch of this criterion appears after this list.
  • Check out the detailed instructions in the evaluation toolbox for more information [link].
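To make the criterion concrete, here is a minimal MATLAB sketch of matching a detected axis against a ground-truth segment. The function name matchReflectionAxis and the tolerance parameters angleTol and distTol are hypothetical; the authoritative implementation and thresholds are in the evaluation toolbox.

    % Minimal sketch of the reflection-axis matching criterion.
    % matchReflectionAxis, angleTol, and distTol are hypothetical names;
    % the official thresholds live in the evaluation toolbox.
    function ok = matchReflectionAxis(det, gt, angleTol, distTol)
        % det, gt: 2x2 matrices, one endpoint per row [x y]
        dDet = det(2,:) - det(1,:);               % detected axis direction
        dGt  = gt(2,:)  - gt(1,:);                % ground-truth axis direction
        % Angle between the two (undirected) axes, folded into [0, 90] deg.
        ang = atan2d(abs(dDet(1)*dGt(2) - dDet(2)*dGt(1)), abs(dot(dDet, dGt)));
        % Distance from the detected segment's center to the GT segment.
        c = mean(det, 1);                         % center of detected axis
        t = max(0, min(1, dot(c - gt(1,:), dGt) / dot(dGt, dGt)));
        dist = norm(c - (gt(1,:) + t * dGt));     % point-to-segment distance
        ok = (ang <= angleTol) && (dist <= distTol);
    end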

B. Rotation Symmetry Detection [Training Images Link]:

  • Your algorithm should output rotation symmetry centers, each defined by a location (one point) and the strength of the symmetry.
  • Evaluation is the Euclidean distance between the detected and ground-truth rotation symmetry centers (same as the previous symmetry competition [Liu et al. 2013]); a minimal sketch appears after this list.
  • Check out the detailed instructions in the evaluation toolbox for more information [link].
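For completeness, the corresponding check for rotation centers is a single Euclidean distance test. The function name matchRotationCenter and the parameter distTol below are hypothetical placeholders; the official threshold is set in the evaluation toolbox.

    % Minimal sketch of the rotation-center matching criterion.
    % matchRotationCenter and distTol are hypothetical; see the toolbox.
    function ok = matchRotationCenter(detCenter, gtCenter, distTol)
        % detCenter, gtCenter: 1x2 vectors [x y]
        ok = norm(detCenter - gtCenter) <= distTol;  % Euclidean distance
    end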

C. Sym-COCO (containing reflection [Training Images Link] and rotation [Training Images Link] symmetry labels):

  • Your algorithm should output a 2D symmetry heatmap of the same size as the input image. See [Funk and Liu arXiv 2017] for more information.
  • Evaluation compares the detected symmetry heatmap against the ground-truth symmetry heatmap. See [Funk and Liu arXiv 2017] for details; a rough sketch of one such comparison appears after this list.
  • Check out the detailed instructions in the evaluation toolbox for more information [link].
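As a rough illustration of how a detected heatmap can be scored against ground truth, the sketch below sweeps binarization thresholds and reports the best F-measure, in the spirit of standard edge-map evaluation. This is an assumption, not the official protocol, and scoreHeatmap is a hypothetical name; the authoritative procedure is defined in [Funk and Liu arXiv 2017] and the evaluation toolbox.

    % Minimal sketch: threshold sweep with best-F scoring of a heatmap.
    % An assumption-laden stand-in for the official Sym-COCO protocol.
    function bestF = scoreHeatmap(heatmap, gt)
        % heatmap: HxW map in [0,1]; gt: HxW logical ground-truth mask
        bestF = 0;
        for th = 0.01:0.01:0.99
            det = heatmap >= th;                 % binarize at threshold
            tp = nnz(det & gt);                  % true positives
            prec = tp / max(nnz(det), 1);
            rec  = tp / max(nnz(gt), 1);
            f = 2 * prec * rec / max(prec + rec, eps);
            bestF = max(bestF, f);
        end
    end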


To submit to these competitions, you will need to include:

  • Two MATLAB functions: one that loads your model and one that runs your algorithm on an image (a hypothetical interface is sketched after this list). Check out the corresponding toolbox for more information.
  • A 4-page report describing your method. Check out the submission page for more information.
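The sketch below shows one plausible shape for the two required functions. All names here (loadModel, detectSymmetry, the .mat file, myDetector) are hypothetical; the exact interface is defined in the corresponding toolbox.

    % Hypothetical submission interface; the real signatures are
    % specified in the corresponding toolbox.
    function model = loadModel()
        % Load weights/parameters once, before the test images are run.
        model = load('my_symmetry_model.mat');    % placeholder file name
    end

    function result = detectSymmetry(model, img)
        % Run the detector on one image. For competitions A and B, result
        % holds axes/centers plus strengths; for C, a heatmap sized as img.
        result = myDetector(model, img);          % placeholder call
    end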

References:

  • Funk, Christopher, and Yanxi Liu. "Beyond Planar Symmetry: Modeling human perception of reflection and rotation symmetries in the wild." arXiv preprint arXiv:1704.03568 (2017).
  • Funk, Christopher, and Yanxi Liu. "Symmetry ReCAPTCHA." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5165-5174. 2016.
  • Lee, Seungkyu, and Yanxi Liu. "Curved glide-reflection symmetry detection." IEEE transactions on pattern analysis and machine intelligence 34, no. 2 (2012): 266-278.
  • Lee, Seungkyu, and Yanxi Liu. "Skewed rotation symmetry group detection." IEEE transactions on pattern analysis and machine intelligence 32, no. 9 (2010): 1659-1672.
  • Liu, Jingchen, George Slota, Gang Zheng, Zhaohui Wu, Minwoo Park, Seungkyu Lee, Ingmar Rauschert, and Yanxi Liu. "Symmetry detection from real-world images competition 2013: Summary and results." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 200-205. 2013.
  • Liu, Yanxi, Hagit Hel-Or, Craig S. Kaplan, and Luc Van Gool. "Computational symmetry in computer vision and computer graphics." Foundations and Trends® in Computer Graphics and Vision 5, no. 1–2 (2010): 1-195.
  • Treder, Matthias Sebastian. "Behind the looking-glass: A review on human symmetry perception." Symmetry 2, no. 3 (2010): 1510-1543.

Contacts: Christopher Funk, Seungkyu Lee.

2D.3 and 2D.4 Translation (1D/2D) Symmetry Detection

These competitions are about detecting translation symmetry in real-world images, in the form of either 1D (frieze) or 2D (wallpaper) repeating patterns. Understanding how these patterns repeat can help in:

  • Façade detection
  • Aerial-to-street matching
  • 3D reconstruction

For these challenges, you will detect the patterns in real-world images. You can submit to either or both of the competitions.

The evaluation criteria are the same as in the previous symmetry competition [Liu et al. 2013].

To submit to these challenges, you will need to include:

  • A MATLAB function, run.m, which takes an image and an output filename, runs your algorithm on the image, and writes the detected lattice to the file in the .lat format (same as the training annotations); a hypothetical stub is sketched after this list. Check out the detailed instructions in the evaluation toolbox for more information [link].
  • A 4-page report describing your method. Check out the submission page for more information.
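A hypothetical run.m stub follows. It assumes, purely for illustration, that the .lat format stores one lattice point per line; check the training annotations and the toolbox for the exact format. detectLattice is a placeholder for your algorithm.

    % Hypothetical run.m stub; the .lat output must match the format of
    % the training annotations (the per-line layout below is assumed).
    function run(imgPath, latPath)
        img = imread(imgPath);                % load the test image
        lattice = detectLattice(img);         % placeholder: your detector
        fid = fopen(latPath, 'w');
        for i = 1:size(lattice, 1)
            fprintf(fid, '%f %f\n', lattice(i,1), lattice(i,2));  % x y
        end
        fclose(fid);
    end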

References:

  • Liu, Jingchen, George Slota, Gang Zheng, Zhaohui Wu, Minwoo Park, Seungkyu Lee, Ingmar Rauschert, and Yanxi Liu. "Symmetry detection from real-world images competition 2013: Summary and results." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 200-205. 2013.
  • Liu, Jingchen, and Yanxi Liu. "Local regularity-driven city-scale facade detection from aerial images." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3778-3785. 2014.
  • Liu, Yanxi, Robert T. Collins, and Yanghai Tsin. "A computational model for periodic pattern perception based on frieze and wallpaper groups." IEEE transactions on pattern analysis and machine intelligence 26, no. 3 (2004): 354-371.
  • Liu, Yanxi, Hagit Hel-Or, Craig S. Kaplan, and Luc Van Gool. "Computational symmetry in computer vision and computer graphics." Foundations and Trends® in Computer Graphics and Vision 5, no. 1–2 (2010): 1-195.
  • Park, Minwoo, Kyle Brocklehurst, Robert T. Collins, and Yanxi Liu. "Deformed lattice detection in real-world images using mean-shift belief propagation." IEEE Transactions on Pattern Analysis and Machine Intelligence 31, no. 10 (2009): 1804-1816.
  • Park, Minwoo, Kyle Brocklehurst, Robert Collins, and Yanxi Liu. "Translation-symmetry-based perceptual grouping with applications to urban scenes." Computer Vision–ACCV 2010 (2011): 329-342.
  • Park, Minwoo, Kyle Brocklehurst, Robert Collins, and Yanxi Liu. "Image de-fencing revisited." Computer Vision–ACCV 2010 (2011): 422-434.
  • Wolff, Mark, Robert T. Collins, and Yanxi Liu. "Regularity-driven facade matching between aerial and street views." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1591-1600. 2016.

Contacts: Christopher Funk.

2D.5 Medial Axis Detection

The medial axis transform (MAT) is a powerful shape abstraction that has found application in both computer vision and graphics, for tasks such as:

    • Object and pose recognition
    • Shape deformation with volume preservation
    • Mesh simplification

For natural images, the task of medial point detection amounts to detecting the locations of medial points of an object or other locally symmetric structure in an image. The set of these points is an approximation of the medial axis or skeleton of the object. Potential applications include:

    • Centerline detection for aerial and medical images
    • Painterly rendering
    • Interactive segmentation

For this challenge we consider two different flavors of the medial point detection task:

    • Object skeleton detection (only medial axes of foreground objects are considered). For this challenge we use the SK-LARGE dataset.
    • Generic medial axis detection (no distinction between foreground objects and background structures). For this challenge we use the BMAX500 dataset.


To take part in the challenge, do the following:

    1. Download the respective dataset (BMAX500 or SK-LARGE).
    2. Download the package with the evaluation code.
    3. Submit two MATLAB functions: one that loads your model and one that returns a medial point probability map or a binary medial point map (a hypothetical interface is sketched after these steps). Check the detailed instructions in the testMedialPointDetection.m and README.md files.
    4. Do not forget to also submit the 4-page report describing your method, or cite an already published related work. More information can be found on the Submission page.
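The sketch below shows one plausible shape for the two required functions; loadModel, detectMedialPoints, the .mat file, and myMedialDetector are hypothetical names, and testMedialPointDetection.m and README.md remain the authoritative interface description.

    % Hypothetical submission interface for medial point detection.
    function model = loadModel()
        model = load('my_medial_model.mat');       % placeholder file name
    end

    function medialMap = detectMedialPoints(model, img)
        % Return a medial point probability map in [0,1] or a binary
        % medial point map, the same size as the input image.
        medialMap = myMedialDetector(model, img);  % placeholder call
    end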

Contacts: Wei Shen, Stavros Tsogkas.