TRICKY 2025
Transparent & Reflective objects In the wild Challenges
19th October 2025 - Afternoon Session
Image credit: Pieter Claesz, Public domain, via Wikimedia Commons
The advancements in object detection, segmentation, and pose estimation, exemplified by the COCO, LVIS, and BOP challenges, demonstrate the rapid progress of state-of-the-art methods. However, their common (explicit or implicit) assumption that objects are Lambertian, i.e., only generate diffuse reflections of light, is an oversimplification of the actual visual world. For non-Lambertian objects, such as glass or metal, the specific scene arrangement leads to variations in appearance that extend beyond mere texture and occlusion changes. For instance, objects are not only directly observable but also seen through reflection or refraction, depending on their relative position to transparent objects. Additionally, the appearance of specular highlights is influenced by light and camera location. Depth sensing also assumes a “Lambertian world” and consequently fails to accurately measure the distance to transparent objects. Consequently, the performance of current approaches, irrespective of the input modality, rapidly deteriorates when confronted with such tricky scenes.
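To make the Lambertian assumption concrete, here is a standard reflectance-model sketch (background illustration, not part of the workshop or challenge definitions): a Lambertian surface has the constant BRDF f_r = rho / pi, so its reflected radiance

    L_o(x) = \frac{\rho}{\pi} \int_{\Omega} L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i

is independent of the viewing direction. Transparent and specular materials instead have a view-dependent BRDF (plus transmission), i.e.

    L_o(x, \omega_o) = \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i,

which is why their appearance, and the measurements of depth sensors built on the diffuse assumption, change with camera and light placement.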
The workshop on Transparent & Reflective objects In the wild Challenges (TRICKY) aims to address object classification, detection, tracking, reconstruction, depth, and pose estimation from imaging data for such challenging objects. This will highlight and identify the associated challenges in these tasks, thereby advancing the state-of-the-art. A primary focus will be on the applicability of methods in unconstrained scenarios, such as natural scene arrangements, mixtures of Lambertian and non-Lambertian objects, and varying illumination.
This will be achieved through depth and object pose estimation challenges, as well as 4 invited talks. The workshop will also include 6 spotlight talks and up to 12 posters of contributed works to encourage the discussion of novel research directions.
You will be able to join the workshop virtually.
Depth and pose estimation are critical for enabling machines to interact effectively with the real world. While depth estimation provides the spatial structure of a scene, pose estimation localizes and orients objects within it, both fundamental for robotics, augmented reality, and 3D scene understanding.
Traditional depth and pose estimation approaches have achieved impressive results on standard benchmarks like KITTI, Middlebury, Linemod, and YCB. However, when these methods encounter reflective and transparent objects, their performance degrades significantly. This limitation is particularly problematic as these challenging materials are common in everyday environments.
For this reason, we are organising TRICKY 2025, featuring two complementary challenges that aim to encourage the development of next-generation computer vision algorithms capable of advanced reasoning on non-Lambertian objects (a brief sketch of typical evaluation metrics for both tracks follows the track descriptions below):
Monocular Depth Track: Enhance pipeline designs for single-view dense depth map prediction from RGB input.
More details can be found on the challenge server: https://codalab.lisn.upsaclay.fr/competitions/22870
Category-level Object Pose Track: Improve the prediction of object pose, shape, and size for object instances of a known category based on RGB-D input.
More details can be found on the challenge server: https://codalab.lisn.upsaclay.fr/competitions/23075
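As a rough illustration of how the two tracks are typically evaluated, the minimal Python sketch below computes standard monocular-depth metrics (AbsRel, RMSE, delta < 1.25) and category-level pose errors (geodesic rotation error and translation error). The authoritative metrics and submission formats are those defined on the CodaLab pages linked above; the function names and array conventions here are illustrative assumptions.

import numpy as np

def depth_metrics(pred, gt, mask=None):
    # Monocular Depth Track: compare a predicted dense depth map against
    # ground truth on valid pixels (AbsRel, RMSE, delta < 1.25).
    if mask is None:
        mask = gt > 0  # keep only pixels with valid ground-truth depth
    pred, gt = pred[mask], gt[mask]
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))
    delta1 = float(np.mean(np.maximum(pred / gt, gt / pred) < 1.25))
    return {"AbsRel": abs_rel, "RMSE": rmse, "delta1": delta1}

def pose_errors(R_pred, t_pred, R_gt, t_gt):
    # Category-level Object Pose Track: geodesic rotation error (degrees)
    # and Euclidean translation error for a single object instance.
    cos_angle = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    rot_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    trans = float(np.linalg.norm(np.asarray(t_pred) - np.asarray(t_gt)))
    return rot_deg, trans

Benchmarks in this area often report further measures such as 3D IoU over predicted size and pose, or errors restricted to transparent and specular regions; please refer to the challenge servers for the exact protocol.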
The challenges build upon our experience organizing workshop challenges in conjunction with ECCV 2024 (TRICKY 2024), as well as with CVPR 2023/2024/2025 at both the NTIRE and BOP/R6D workshops.
The challenges are organised according to the following timeline:
Development Phase (May 13th - June 22nd, extended from June 8th): Release of training data (images and ground truths) and validation data (images only) to all registered participants. Participants can upload their validation results to our server and receive immediate feedback based on an automated comparison with the hidden ground truths;
Test Phase (June 23rd - June 29th, changed from June 9th - June 15th): Release of test data (images only). Participants must submit their final predictions and a description of their methods before the deadline. We will accept submissions of both published and novel techniques to assess recent advancements in the field. Novel methods will be invited to submit papers to the workshop;
Fact Sheet / Code / Model Submission Deadline (June 30th, extended from June 16th);
Release of Final Leaderboard (July 1st, extended from June 17th);
Invited Talk Notification (August 28th): The organizers plan to reserve one or more slots to present particularly innovative methods.
The paper submission timeline is as follows:
Paper submission deadline: 4th July 2025 (extended from 20th June 2025);
Author Notification: 9th July 2025 (extended from 25th June 2025);
Camera-ready version: 25th July 2025.
Submission link: https://cmt3.research.microsoft.com/TRICKY2025
We invite submissions of full papers (up to 8 pages following the ICCV 2025 template, excluding references) or extended abstracts (shorter than the equivalent of 4 pages in the ICCV template format; not considered a publication with respect to double-submission policies) on topics related to transparent and reflective object understanding.
Reviewing of abstract submissions will be double-blind. This workshop aims to discuss and open new research directions for the understanding of transparent and reflective objects. The accepted full papers will be published in the official ICCV 2025 workshop proceedings.
As part of the workshop schedule, 6 submitted papers will be selected for spotlight presentations as contributed talks, and up to 20 posters will be presented during the poster session. The goal is to encourage exploration and discussion of promising alternative methods, whether or not they outperform standard approaches.
Topics of interest include object classification, detection, tracking, reconstruction, depth, and pose estimation from imaging data for non-Lambertian objects (transparent and specular). Authors are encouraged to take advantage of relevant existing datasets:
HAMMER dataset for depth prediction tasks, as it includes multiple sensors (including polarisation);
HouseCat6D dataset for pose estimation tasks;
XYZ dataset for depth estimation tasks;
Booster dataset for depth predictions from monocular or stereo images.
Our tentative program committee is composed of: Doris Antensteiner (Austrian Institute of Technology, Austria), Benjamin Busam (Technical University of Munich, Germany), Alex Costanzino (University of Bologna, Italy), Luigi Di Stefano (University of Bologna, Italy), Junwen Huang (Technical University of Munich, Germany), Weihang Li (Technical University of Munich, Germany), Matteo Poggi (University of Bologna, Italy), Fabio Tosi (University of Bologna, Italy), Markus Vincze (TU Wien, Austria), Jean-Baptiste Weibel (TU Wien, Austria), Guangyao Zhai (Technical University of Munich, Germany), Pierluigi Zama Ramirez (University of Bologna, Italy).
13:30 - 13:40 - Welcome and opening
13:40 - 14:10 - Dr. Anton Obukhov
14:15 - 14:45 - Prof. Andrea Tagliasacchi
14:50 - 15:10 - Challenge presentations and winners
15:10 - 15:40 - Coffee break
15:40 - 16:10 - Prof. Ayoung Kim
16:15 - 16:45 - Prof. He Wang
16:50 - 17:00 - Closing remarks
Dr. Anton Obukhov
Prof. Andrea Tagliasacchi
Prof. Ayoung Kim
Prof. He Wang
If you have any questions, feel free to contact us at tricky2025-organizers@googlegroups.com, alex.costanzino{at}unibo{dot}it, or pierluigi.zama{at}unibo{dot}it.
Alex Costanzino, PhD student at CVLAB, University of Bologna, Italy;
Pierluigi Zama Ramirez, Junior Assistant Professor at CVLAB, University of Bologna, Italy;
Fabio Tosi, Junior Assistant Professor, CVLAB, University of Bologna, Italy;
Matteo Poggi, Tenure-track Assistant Professor, CVLAB, University of Bologna, Italy;
Luigi Di Stefano, Full Professor at CVLAB, University of Bologna, Italy;
Jean-Baptiste Weibel, Automation and Control Institute, TU Wien, Vienna, Austria;
Doris Antensteiner, Austrian Institute of Technology, Austria;
Markus Vincze, Automation and Control Institute, TU Wien, Vienna, Austria;
Benjamin Busam, currently Research Group Lead (then Associate Professor) at TUM, Germany;
Guangyao Zhai, PhD student at the Technical University of Munich, Germany;
Weihang Li, PhD student at the Technical University of Munich, Germany;
Junwen Huang, PhD student at the Technical University of Munich, Germany;
HyunJun Jung, PhD student at the Technical University of Munich, Germany.