OmniCV 2022 @ CVPR, New Orleans

Our objective is to provide a venue for novel research in omnidirectional computer vision, with an eye toward actualizing these ideas for commercial or societal benefit. As omnidirectional cameras become more widespread, we want to bridge the gap between research on omnidirectional vision technologies and their application. Omnidirectional cameras are already common in application areas such as automotive, surveillance, photography, and simulation, as well as in other use cases that benefit from a large field of view. More recently, they have garnered interest for use in virtual and augmented reality. We want to encourage the development of new models that natively operate on omnidirectional imagery and to close the performance gap between perspective-image and omnidirectional algorithms.
From immersive experiences to autonomous driving to medical imaging and more, we are increasingly seeing that maximizing a camera’s field of view can solve real-world problems. In the past few years, interest in omnidirectional imaging has grown, driven in part by the desire to maximize the amount of content and context captured in a single image. Fisheye cameras installed in modern vehicles and commodity omnidirectional cameras from companies like Ricoh and Insta360 have helped to popularize this imaging modality and have opened the door to more consumer-facing applications in the automotive, home services, and real estate industries, among others. Our workshop seeks to provide a link between the formative research that supports these advances and the realization of commercial products that leverage this technology. We want to encourage the development of new algorithms and applications for this imaging modality that will continue to drive this engine of progress.
Wide field of view and omnidirectional imaging require special privacy considerations, as these techniques capture more expansive content than the more common perspective image. Furthermore, the common application of wide field of view imaging to surveillance, and its frequent use as an "always-on" sensor for vehicles and augmented reality devices, raise questions about how we can leverage the power of this imaging modality while also preserving individual privacy.
We have encouraged our speakers to address these considerations in their keynotes. We also encourage authors to explore topics around the privacy implications of omnidirectional imaging, along with privacy-preserving solutions that are unique to this modality and its applications. We further encourage authors to consider ways to leverage omnidirectional vision to address societal and environmental problems, as well as other important issues that impact billions of people around the world.
Junho Kim is a postdoctoral scholar at the Institute of New Media and Communications, Seoul National University (SNU). He received his Ph.D. in Electrical and Computer Engineering from SNU in 2025 and his B.S. in the same field, summa cum laude, in 2020. His research focuses on computer vision, particularly omnidirectional localization, 3D reconstruction, and scene understanding, with numerous publications at CVPR, ICCV, and ECCV. He has interned at Meta Reality Labs and Snap Research, contributing to projects on robust outdoor localization and privacy-preserving visual localization. Junho has received the Distinguished Ph.D. Dissertation Award from SNU, Doctoral Consortium Awards at ICCV 2025 and KCCV 2025, and Outstanding Reviewer Awards at ECCV 2024 and CVPR 2025.
Huajian Huang is a Postdoctoral Fellow at the Hong Kong University of Science and Technology (HKUST). Before that, he completed his Ph.D. in Computer Science and Technology at HKUST under the supervision of Prof. Sai-Kit Yeung. His research focuses on advancing omnidirectional vision techniques to enable robotic spatial intelligence, with a particular emphasis on localization, mapping, and 3D scene understanding. Huajian has contributed several foundational frameworks in this area, including 360VO (visual odometry), 360Loc (visual localization), 360VOTS (object tracking and segmentation), and 360Roam (scene roaming).
Mike Lambeta (Meta FAIR Robotics) works on a robotics team within Meta's Fundamental AI Research (FAIR) focused on developing robotic hardware and algorithms that can perform various household tasks. The team's work aims to drive fundamental AI breakthroughs in areas like high-level planning and dexterous manipulation skills.
Xin Lin is a Ph.D. student at UCSD and was a research intern at Insta360. His research interests include 3D vision, panoramic vision, and video generation, and his work has been published in TPAMI, ICCV, ICLR, and ECCV.