TRICKY 2024

Transparent & Reflective objects In the wild Challenges

September 29th - Afternoon session

Pieter Claesz, Public domain, via Wikimedia Commons

Overview

As exemplified by the results of the COCO, LVIS and BOP challenges, the performance of state-of-the-art methods in object detection, segmentation and pose estimation is progressing rapidly. However, their common (explicit or implicit) assumption that objects are Lambertian, i.e., that they only reflect light diffusely, is an oversimplification of the actual visual world. For non-Lambertian objects, such as those made of glass or metal, the specific scene arrangement creates variations in appearance that go beyond mere texture and occlusion changes. For example, objects are observable not only directly but also via reflection or refraction, depending on their location relative to transparent objects, while the appearance of specular highlights depends on the light and camera locations. Depth sensing likewise assumes a "Lambertian world" and hence fails to correctly measure the distance to transparent objects. The performance of current approaches, independent of the input modality, therefore deteriorates quickly when faced with such tricky scenes.
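To make the assumption concrete: under the Lambertian model, observed intensity depends only on the surface normal and the light direction, whereas a specular term also depends on the viewing direction. The following minimal Python sketch contrasts the two; the function names and the Phong-style specular term are illustrative choices, not tied to any particular benchmark or method discussed at the workshop:

    import numpy as np

    def lambertian_shading(albedo, normal, light_dir):
        # Diffuse (Lambertian) intensity: albedo * max(0, n . l).
        # The viewing direction does not appear, so the appearance is
        # independent of the camera position.
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        return albedo * max(0.0, float(np.dot(n, l)))

    def specular_term(normal, light_dir, view_dir, shininess=32.0):
        # Phong-style specular highlight: depends on the viewing
        # direction, which is exactly what the Lambertian model ignores.
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        v = view_dir / np.linalg.norm(view_dir)
        r = 2.0 * float(np.dot(n, l)) * n - l  # mirror reflection of l about n
        return max(0.0, float(np.dot(r, v))) ** shininess

Because the specular term varies with the camera position, the same object can look radically different across views, which is the failure mode motivating this workshop.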

The 2nd edition of the Transparent & Reflective objects In the wild Challenges (TRICKY) workshop will discuss object classification, detection, tracking, reconstruction, depth estimation and pose estimation from imaging data for such tricky objects, in order to highlight the related challenges in these tasks and advance the state of the art. A major focus will be placed on the applicability of methods in unconstrained scenarios, such as natural scene arrangements, mixes of Lambertian and non-Lambertian objects, or changing illumination. This will be achieved with a depth estimation challenge as well as 5 invited talks. The workshop will also include 6 spotlight talks and up to 12 posters of contributed works to encourage the discussion of novel research directions.


Challenge - Monocular Depth from Images of Specular and Transparent Surfaces 

Depth estimation has been intensively studied in Computer Vision for decades. With the establishment of deep learning, modern approaches achieve negligible error rates on traditional depth benchmarks such as KITTI and Middlebury. However, when these methods are tested on datasets containing reflective and transparent objects, their performance degrades significantly.

For this reason, we are organizing a monocular depth estimation challenge employing depth datasets featuring non-Lambertian objects. This challenge aims to encourage the community to develop next-generation monocular depth networks capable of reasoning at a higher level and thus yielding accurate, high-quality predictions for objects that are challenging yet of everyday use.
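The official evaluation protocol is defined on the challenge page linked below. As a rough sketch of how such benchmarks are typically scored, the following Python snippet computes standard monocular depth metrics (absolute relative error, RMSE, and the delta < 1.25 accuracy) over a validity mask, which can be restricted to transparent or specular regions; all names here are illustrative and not the challenge's actual evaluation code:

    import numpy as np

    def depth_metrics(pred, gt, valid_mask):
        # Evaluate only where ground-truth depth is valid (e.g., within
        # annotated transparent/specular regions).
        p, g = pred[valid_mask], gt[valid_mask]
        abs_rel = np.mean(np.abs(p - g) / g)   # absolute relative error
        rmse = np.sqrt(np.mean((p - g) ** 2))  # root mean squared error
        ratio = np.maximum(p / g, g / p)
        delta1 = np.mean(ratio < 1.25)         # fraction within 25% of gt
        return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}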

This challenge is the next iteration of the challenges organized in collaboration with the NTIRE workshop at CVPR 2023 and CVPR 2024.

The challenge is organized according to the following timeline:

More details on the challenge can be found at https://cvlab-unibo.github.io/booster-web/tricky24.html

Paper Submission

Call for Contributed Papers

The paper submission timeline is as follows:

Submission link: https://cmt3.research.microsoft.com/TRICKY2024

Guidelines

We invite submissions of full papers (up to 14 pages following the ECCV 2024 template, excluding references) or extended abstracts (to not be considered a publication in terms of double-submission policies, these should be shorter than the equivalent of 4 pages in the CVPR template format) on topics related to transparent and reflective object understanding. Reviewing of abstract submissions will be double-blind. The purpose of this workshop is to discuss and open new directions of research for transparent and reflective object understanding. The accepted full papers will be published as part of the official ECCV 2024 workshop proceedings.

As part of the workshop schedule, 6 submitted papers will be selected for a spotlight presentation as contributed talks, and up to 20 posters will be presented during the poster session. The goal is to encourage exploration and discussion of promising alternative methods whether or not they yet outperform standard approaches.

Topics of interest are object classification, detection, tracking, reconstruction, depth and pose estimation from imaging data for non-Lambertian objects (transparent and specular). Authors are encouraged to take advantage of relevant existing datasets:

Our tentative program committee is composed of: Dr. Doris Antensteiner (AIT), Philipp Ausserlechner (TU Wien), Dr. Dominik Bauer (Columbia University), Alex Costanzino (University of Bologna), Hrishikesh Gupta (TU Wien), Peter Hoenig (TU Wien), Matteo Poggi (University of Bologna), Prof. Luigi Di Stefano (University of Bologna), Tessa Pulli (TU Wien), Pierluigi Zama Ramirez (University of Bologna), Dr. Stefan Thalhammer (UAS Technikum Vienna), Fabio Tosi (University of Bologna), Prof. Markus Vincze (TU Wien), Dr. Jean-Baptiste Weibel (TU Wien)

Program

[Workshop schedule]

Invited Speakers

"Neural Representations for Real-time View Synthesis, 3D Asset Generation, and Beyond"

Dr. Michael Niemeyer

Dr. Michael Suppa

"3D Perception of Photometrically Challenging Scenes and Objects with Multi-Modal Data"
Dr. Benjamin Busam

Prof. Jeff Ichnowski

"Towards a Geometric Understanding in Egocentric Videos"

Dr. Diane Larlus

"Towards Photorealistic Digital Twins"

Prof.  Manmohan Chandraker

Contact

Feel free to contact us at tricky2024-organizers@googlegroups.com if you have any questions.

Organizers