Omnidirectional Computer Vision

in research and industry

in conjunction with the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2020), Seattle, WA

Monday, June 15, 2020

Accepted papers will be published in IEEE Xplore!


Our objective is to provide a venue for novel research in omnidirectional computer vision, with an eye toward actualizing these ideas for commercial or societal benefit. As omnidirectional cameras become more widespread, we want to bridge the gap between the research and the application of omnidirectional vision technologies. Omnidirectional cameras are already common in application areas such as automotive, surveillance, photography, and simulation, as well as other use cases that benefit from a large field of view; more recently, they have garnered interest for virtual and augmented reality. We want to encourage the development of new models that operate natively on omnidirectional imagery and to close the performance gap between perspective-image and omnidirectional algorithms. This full-day workshop features twelve invited speakers from academia and industry.

Invited Speakers


Taco Cohen

ML Researcher, Qualcomm; PhD Student, University of Amsterdam

Kristen Grauman

Professor, University of Texas at Austin

Jean-François Lalonde

Associate Professor, Electrical and Computer Engineering Department, Université Laval

Matthias Niessner

Professor, Visual Computing Lab, Technical University of Munich

Tomas Pajdla

Associate Professor, Czech Technical University in Prague

Davide Scaramuzza

Professor of Robotics and Perception, ETH Zurich



Rajat Aggarwal

CEO

Albert Parra Pozo

Research Scientist

Hirochika Fujiki

Ricoh Company

Patrick Denny

Senior Expert

Gang Hua

VP and Chief Scientist, Wormpex AI

Sing Bing Kang

Distinguished Scientist, Zillow Group


Topics of Interest

  • Computer vision techniques that natively use omnidirectional cameras (e.g. fisheye, catadioptric, polydioptric, and 360° cameras) without image-space linearization
  • Transferring perspective algorithms to omnidirectional data
  • Omnidirectional cameras for 3D geometry (e.g. epipolar geometry, stereo, SFM, MVS)
  • Visual odometry, SLAM, and SFM with omnidirectional cameras
  • Processing multi-camera networks of omnidirectional cameras
  • Adaptation of CNNs for omnidirectional images (e.g. modifying convolutional operations for omnidirectional manifolds, adapting neural networks for omnidirectional cameras)
  • New techniques for omnidirectional camera calibration
  • View-mapping using omnidirectional camera networks (e.g. adaptive vehicle surround view, immersive virtual reality, image stitching on non-linear surfaces)
  • Synthesizing virtual omnidirectional images from narrow FOV cameras (e.g. panorama stitching)
  • Comparison of standard rectilinear correction techniques vs. direct use of omnidirectional camera images
  • View synthesis with omnidirectional cameras (e.g. free-viewpoint video, mount removal for omnidirectional cameras)
  • Omnidirectional image compression
  • Applications in, but not limited to, augmented reality, video surveillance, medical imaging, automotive, robotics, unmanned aerial and underwater vehicles, and omnidirectional simulations
  • New omnidirectional datasets
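Several of the topics above (native omnidirectional processing, CNN adaptation, view synthesis) start from the same basic operation: mapping a pixel in a 360° image to a viewing direction on the unit sphere. As a minimal illustrative sketch — assuming the common equirectangular convention (longitude spanning the image width, latitude the height), not any particular speaker's method — the mapping can be written as:

```python
import math

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit ray direction.

    Assumes longitude spans [-pi, pi) across the image width and
    latitude spans [pi/2, -pi/2] from top to bottom (y axis up).
    Pixel centers sit at half-integer offsets, hence the +0.5.
    """
    lon = (u + 0.5) / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v + 0.5) / height * math.pi
    # Spherical-to-Cartesian conversion; the result is a unit vector.
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

For example, the center of a 512×256 equirectangular image maps to the forward direction (0, 0, 1). Algorithms that operate natively on 360° imagery typically work with these ray directions (or the spherical manifold they define) rather than with the distorted pixel grid itself.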

Point(s) of Contact: