Call for Papers

Important dates

Please see the homepage for dates.

Topics of interest

This workshop focuses on attention-based architectures in computer vision. Topics of interest include, but are not limited to:

Instructions

We invite submissions of extended abstracts on topics related to Vision Transformers and attention representation. Submissions should adhere to the CVPR 2023 paper submission style (example paper/author kit), with a maximum of 4 pages (excluding references). All submissions must be made via CMT. If you would like to include supplementary material, feel free to attach an appendix to the PDF after the 4 pages of main text and the references. Please note that reviewers are not required to look at the appendix when evaluating the paper. If you have other media to attach (videos, etc.), feel free to add anonymized links.

Policies

Submissions can present new, unpublished results or recently published work (including at CVPR'23) that is highly relevant to the workshop's topic. Accepted abstracts will not appear in the IEEE Proceedings of CVPR 2023.

Reviewing will be double-blind. Authors should refer to their prior work in the third person wherever possible, and should refrain from including acknowledgements, grant numbers, or public GitHub repository links in their submissions. If an anonymous reference is needed in the paper (e.g., to refer to the authors' own work that is under review elsewhere), include the referenced work as supplementary material as noted above. Note that anonymizing the submission is mandatory; papers that explicitly or implicitly reveal the authors' identities will be rejected. A reviewer may be able to deduce the authors' identities using external resources, such as technical reports published on the web. The availability of information on the web that may allow reviewers to infer the authors' identities does not constitute a breach of the double-blind submission policy; however, reviewers are explicitly asked not to seek out this information.