TTA-COPE: Test-Time Adaptation
for Category-Level Object Pose Estimation


Taeyeop Lee1 Jonathan Tremblay2 Valts Blukis2 Bowen Wen2 Byeong-Uk Lee1

Inkyu Shin1 Stan Birchfield2 In So Kweon1 Kuk-Jin Yoon1

1KAIST 2NVIDIA

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)

[Paper] [Preprint] [Video]

Abstract

Test-time adaptation methods have been gaining attention recently as a practical solution for addressing source-to-target domain gaps by gradually updating the model without requiring labels on the target data. In this paper, we propose a test-time adaptation method for category-level object pose estimation, called TTA-COPE. We design a pose ensemble approach with a self-training loss using pose-aware confidence. Unlike previous unsupervised domain adaptation methods for category-level object pose estimation, our approach processes the test data in a sequential, online manner and does not require access to the source domain at runtime. Extensive experimental results demonstrate that the proposed pose ensemble and self-training loss improve category-level object pose estimation performance during test time under both semi-supervised and unsupervised settings.
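To give a feel for the online setting described above, here is a minimal, generic sketch of a sequential test-time adaptation loop with an ensemble-based pseudo-label and a confidence gate. All names (`ToyPoseModel`, `adapt_online`, the threshold `tau`) and the toy 1-D "pose" model are illustrative assumptions, not the paper's actual architecture, losses, or pose-aware confidence measure.

```python
# Sketch only: a generic student/teacher test-time adaptation loop on a
# 1-D toy model, standing in for a full pose estimator. Not TTA-COPE itself.
import numpy as np

rng = np.random.default_rng(0)

class ToyPoseModel:
    """Toy stand-in for a pose estimator: predicts y = w * x."""
    def __init__(self, w):
        self.w = float(w)
    def predict(self, x):
        return self.w * x

def adapt_online(student, teacher, stream, tau=0.5, lr=0.1, momentum=0.9):
    """Process the unlabeled target stream sequentially (no source data).

    For each sample, the pseudo-label is the ensemble (mean) of student and
    teacher predictions. The student is updated only when the two models
    agree closely (a crude stand-in for a pose-aware confidence), and the
    teacher tracks the student via an exponential moving average.
    """
    for x in stream:
        ps, pt = student.predict(x), teacher.predict(x)
        pseudo = 0.5 * (ps + pt)           # prediction "ensemble"
        conf = 1.0 / (1.0 + abs(ps - pt))  # agreement -> confidence in (0, 1]
        if conf > tau:                     # self-train on confident samples
            grad = 2.0 * (ps - pseudo) * x # d/dw of (w*x - pseudo)^2
            student.w -= lr * grad
        teacher.w = momentum * teacher.w + (1.0 - momentum) * student.w
    return student, teacher

# Student and teacher start from different "pretrained" weights, mimicking a
# source-trained model facing a domain gap; the stream arrives one sample at
# a time and is never revisited.
student = ToyPoseModel(w=1.5)
teacher = ToyPoseModel(w=1.2)
stream = rng.uniform(0.5, 1.5, size=200)
adapt_online(student, teacher, stream)
```

Running the loop drives the student and teacher toward agreement on the target stream; in the real method the same control flow operates on 6D object poses with a learned, pose-aware confidence rather than this toy agreement score.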

Video

Experiments

Qualitative comparison between no adaptation and our test-time adaptation.

BibTeX

@inproceedings{lee2023tta,
  title={{TTA-COPE}: Test-Time Adaptation for Category-Level Object Pose Estimation},
  author={Lee, Taeyeop and Tremblay, Jonathan and Blukis, Valts and Wen, Bowen and Lee, Byeong-Uk and Shin, Inkyu and Birchfield, Stan and Kweon, In So and Yoon, Kuk-Jin},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2023}
}

Contact

If you have any questions, please feel free to contact Taeyeop Lee.
