Call for Papers

We invite authors to submit unpublished papers that follow the theme and topics of our workshop. Accepted papers will be presented at a poster session, and exceptional papers will additionally be invited for oral presentation. All submissions will go through a double-blind review process. Papers will be selected based on relevance, significance and novelty of results, technical merit, and clarity of presentation. Papers will be published in the CVPR 2020 proceedings.


Submitted papers may be one of the following:

  • Full-length papers (up to 8 pages + references),
  • Short papers (4 pages, incl. references) presenting promising results that invite further research and development, or
  • Survey or review papers (up to 8 pages + references) presenting a deeper dive into a pertinent track of research within omnidirectional vision

Submissions should be through the CMT website: https://cmt3.research.microsoft.com/OmniCVW2020

Format should be according to the CVPR format: http://cvpr2020.thecvf.com/submission/main-conference/author-guidelines

UPDATE 3/21/2020: Submissions are now closed.

Important Dates:


Submission Deadline: Friday, March 20 (11:59pm PST)

Author Notification: Before Friday, April 10

Camera Ready: Friday, April 17


Workshop Date: Monday, June 15, 2020 (Full Day)

Topics of Interest:


Topics of interest for the workshop include:

  • Computer vision techniques that natively use omnidirectional cameras (e.g. fisheye, catadioptric, polydioptric and 360° cameras) without image space linearization
  • Transferring perspective algorithms to omnidirectional data
  • Omnidirectional cameras for 3D geometry (e.g. epipolar geometry, stereo, SFM, MVS)
  • Visual odometry, SLAM, and SFM with omnidirectional cameras
  • Processing multi-camera networks of omnidirectional cameras
  • Adaptation of CNNs for omnidirectional images (e.g. modifying convolutional operations for omnidirectional manifolds, adapting neural networks for omnidirectional cameras)
  • New techniques for omnidirectional camera calibration
  • View-mapping using omnidirectional camera networks (e.g. adaptive vehicle surround view, immersive virtual reality, image stitching on non-linear surfaces, etc.)
  • Synthesizing virtual omnidirectional images from narrow FOV cameras (e.g. panorama stitching)
  • Comparison of standard rectilinear correction techniques vs. direct use of omnidirectional camera images
  • View synthesis with omnidirectional cameras (e.g. free-viewpoint video, mount removal for omnidirectional cameras)
  • Omnidirectional image compression
  • Applications in, but not limited to, augmented reality, video surveillance, medical imaging, automotive, robotics, unmanned aerial and underwater vehicles, and omnidirectional simulations
  • New omnidirectional datasets

Program Committee:


  • Antonis Karakottas, Centre for Research and Technology Hellas (CERTH)
  • Hazem Rashed, Valeo
  • Letizia Mariotti, Valeo
  • Michal Uricar, Valeo
  • Nikolaos Zioulis, Centre for Research and Technology Hellas (CERTH)
  • Ravi Kiran, NavyaTech
  • Sumanth Chennupati, University of Michigan, Dearborn
  • Toby Burns, Maynooth University
  • Yu-Chuan Su, Google