Experiment Settings
DigitalTwinArt [1] is a concurrent work whose evaluation code we did not have access to at the time of submission. Due to limited time, we selected five multi-part objects (each with 2 or more moving parts) for evaluation. To provide a fair evaluation, as we did for the PARIS baseline, we evaluate each object on 3 different seeds (i.e., a separate optimization run per seed) and report the average performance across the 3 runs (sketched below).
[1] Yijia Weng, Bowen Wen, et al. Neural Implicit Representation for Building Digital Twins of Unknown Articulated Objects. In CVPR, 2024.
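For concreteness, the seed-averaging protocol can be sketched as follows. This is a minimal illustration, not code from either method's release; `run_fn` is a hypothetical callable standing in for one full optimization-plus-evaluation run.

```python
import numpy as np

def evaluate_with_seeds(run_fn, obj_id, seeds=(0, 1, 2)):
    """Average metrics over independent optimization runs, one per seed.

    `run_fn(obj_id, seed)` is assumed to perform a full optimization for
    the given seed and return a dict mapping metric names to scalars.
    """
    per_seed = [run_fn(obj_id, seed=s) for s in seeds]
    # Average each metric across the runs (3 by default).
    return {k: float(np.mean([m[k] for m in per_seed])) for k in per_seed[0]}
```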
Visualization of Test Objects
Below we provide visualizations of all the objects used for evaluating DigitalTwinArt. From left to right, the object categories & IDs are: Refrigerator & 12043, Refrigerator & 12059, Refrigerator & 12066, StorageFurniture & 45694, StorageFurniture & 44853.
Quantitative Results
See Table 4 below for quantitative results from our evaluation. Although this is not a full comparison across all the test objects reported in the main paper, these results provide sufficient evidence that, while DigitalTwinArt outperforms our compared baseline PARIS and is significantly more stable to optimize (i.e., results from different runs show lower variance), it still struggles with objects that have more than one moving part and overall under-performs Real2Code. Note that this method does not predict joint types, hence we evaluate each joint assuming its ground-truth type.
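Since the joint type is taken as given, per-joint evaluation reduces to the standard axis metrics. Below is a minimal sketch of the two usual errors (angular error between axis directions, and line-to-line distance of the axes for revolute joints); this is our paraphrase of the standard metrics, not DigitalTwinArt's evaluation code.

```python
import numpy as np

def axis_angle_error_deg(a_pred, a_gt):
    """Angular error between two joint axis directions, in degrees.

    Orientation-agnostic: an axis and its negation are treated as equal.
    This is the only axis metric that applies to prismatic joints.
    """
    a_pred = a_pred / np.linalg.norm(a_pred)
    a_gt = a_gt / np.linalg.norm(a_gt)
    cos = np.clip(abs(np.dot(a_pred, a_gt)), 0.0, 1.0)
    return np.degrees(np.arccos(cos))

def axis_position_error(p_pred, a_pred, p_gt, a_gt):
    """Minimum distance between the two infinite joint axis lines.

    Only meaningful for revolute joints, where the axis position matters.
    Each line is given by a point `p` on it and a direction `a`.
    """
    a_pred = a_pred / np.linalg.norm(a_pred)
    a_gt = a_gt / np.linalg.norm(a_gt)
    n = np.cross(a_pred, a_gt)
    d = p_gt - p_pred
    if np.linalg.norm(n) < 1e-8:
        # Parallel axes: fall back to point-to-line distance.
        return np.linalg.norm(np.cross(d, a_gt))
    # Skew lines: project the offset onto the common normal.
    return abs(np.dot(d, n)) / np.linalg.norm(n)
```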