The first one-day workshop on vision transformers, held on the occasion of ACCV 2022, is an exciting opportunity to present and discuss vision transformers and their applications across computer vision sub-fields. The workshop aims to narrow the gap between research advances in transformer architectures and the use of transformers in practical computer vision applications.
Transformer models have demonstrated excellent performance on a diverse set of computer vision tasks, ranging from classification to segmentation, across data formats such as images, videos, and 3D. The ambition of this workshop is to bring together computer vision and machine learning researchers working to advance the theory, architecture, and algorithmic design of transformer models, as well as practitioners applying transformer models to novel applications and use cases.
University of Central Florida
University of California at Merced; Google
Università degli Studi di Modena e Reggio Emilia
The best paper in the workshop will receive an award worth USD 1000.
We invite paper submissions to the workshop. All submissions should follow the ACCV 2022 author guidelines.
Call for papers: pdf
Paper Submission Due: September 9th, 2022
Notification to Authors: September 25th, 2022
Extended Paper Submission Due: October 3rd, 2022
Note: We accept papers rejected from top conferences such as CVPR/ECCV/ACCV/BMVC until October 3. In this case, the authors are expected to attach the complete reviewer comments they received (all reviewer and meta-reviewer comments, with initial and final ratings), and to state the conference/journal name and paper ID. All of this information should be attached to the paper in a single PDF.
Extended Notification to Authors: October 10th, 2022
Camera-ready Deadline: October 12th, 2022
The workshop is organized in conjunction with
The 16th Asian Conference on Computer Vision (ACCV 2022)
Vision Transformers: Theory and Applications 2022
visiontransformer.accv [ at ] gmail.com
© VTTA-2022