Research

Scene Analysis and Object Mapping from Multi-Sensor Data

Public access to massive repositories of geotagged street-level and satellite imagery has changed the way we perceive and explore the world. My project aims to leverage the potential of this data to perform automatic, high-accuracy mapping of objects and scenes remotely, dramatically reducing the cost of such operations and automating tasks that could previously only be carried out manually. In recent years, most population centres on Earth have been densely covered by street-level and high-resolution satellite imagery made publicly available by providers such as Google, Mapillary and OpenStreetCam. This image data constitutes an invaluable source of information about the environment and infrastructure, from which many applications, including autonomous navigation, planning and asset management, could benefit. At the moment, proprietary and crowd-sourced map providers actively encourage users to contribute information about objects and their positions manually. Nevertheless, the majority of places and objects, along with some of their attributes, can be mapped automatically with the help of computer vision tools.

The goal of this work is to develop cutting-edge solutions that increase the accuracy of, and consolidate the methodology for, multi-sensor scene analysis, thereby improving the usability of this abundant data. Advanced deep learning and statistical data fusion techniques will be designed to bridge the gap between multi-sensor image data recorded in different projections, formats and resolutions. The particular focus of this project is the mapping of stationary objects (street furniture, façade elements, etc.) from multi-sensor data such as street view images, high-resolution satellite or airborne optical imagery, and 3D point cloud data. The main questions addressed in this project are: Can complex scenes depicting multiple objects be processed automatically? Can existing public multi-sensor datasets be used for detail-sensitive scene analysis without resorting to costly and time-consuming follow-up data acquisition campaigns?
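
To make the geotagging step concrete, below is a minimal sketch that triangulates an object's position from two geotagged street-view detections, assuming each detection has already been converted into a camera location and a bearing towards the object. All coordinates and bearings are made up, and this is a simplified stand-in for the full pipeline described in the publications listed below, not the published method itself.

    import math

    EARTH_RADIUS_M = 6371000.0

    def to_local_xy(lat, lon, lat0, lon0):
        # Equirectangular approximation: metres east/north of (lat0, lon0);
        # adequate over the short baselines between nearby panoramas.
        x = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
        y = math.radians(lat - lat0) * EARTH_RADIUS_M
        return x, y

    def triangulate(cam1, bearing1, cam2, bearing2):
        # Cameras are (lat, lon); bearings are degrees clockwise from north
        # towards the detected object. Returns the object's (lat, lon),
        # or None when the two view rays are (near-)parallel.
        lat0, lon0 = cam1
        x1, y1 = 0.0, 0.0
        x2, y2 = to_local_xy(cam2[0], cam2[1], lat0, lon0)
        d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
        d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-9:
            return None
        # Solve (x1, y1) + t * d1 = (x2, y2) + s * d2 for t (Cramer's rule).
        t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
        if t < 0:
            return None  # intersection lies behind the first camera
        ox, oy = x1 + t * d1[0], y1 + t * d1[1]
        lat = lat0 + math.degrees(oy / EARTH_RADIUS_M)
        lon = lon0 + math.degrees(ox / (EARTH_RADIUS_M * math.cos(math.radians(lat0))))
        return lat, lon

    # Two panoramas that both detected the same traffic light (made-up values):
    print(triangulate((53.3438, -6.2546), 45.0, (53.3440, -6.2540), 315.0))

In practice many noisy rays from different images intersect, which is why the publications below combine such intersections with monocular depth estimates and resolve them via MRF-based fusion.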


Semantic analysis of multi-sensor imaging data:

  • Scene analysis

  • Object detection (see the sketch after this list)

  • Classification
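
As a minimal illustration of the detection component, the sketch below runs an off-the-shelf pretrained detector (torchvision's COCO-trained Faster R-CNN, whose classes include some street furniture such as traffic lights and fire hydrants) on a single image. The image path is a placeholder, and the detectors and object classes actually used in the project may differ.

    import torch
    from torchvision.io import read_image
    from torchvision.models.detection import (
        fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

    weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
    model = fasterrcnn_resnet50_fpn(weights=weights).eval()
    preprocess = weights.transforms()

    img = read_image("street_view.jpg")  # placeholder image path
    with torch.no_grad():
        pred = model([preprocess(img)])[0]

    # Report confident detections: class name, bounding box, score.
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score > 0.7:
            name = weights.meta["categories"][label]
            print(name, [round(v, 1) for v in box.tolist()], round(score.item(), 2))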


Key questions:

  • Automatic analysis of complex scenes (multiple objects)

  • Recycling existing image datasets

  • Efficient detail-preserving fusion of multi-sensor data (see the sketch after this list)
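
As a deliberately naive stand-in for the fusion step (the ICIP 2018 paper listed below formulates it as MRF-based multi-sensor fusion), the sketch below simply merges candidate geolocations, e.g. from street-view triangulation and satellite detections, that fall within a few metres of each other. All coordinates are hypothetical.

    import math

    def haversine_m(p, q):
        # Great-circle distance in metres between two (lat, lon) points.
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000.0 * math.asin(math.sqrt(a))

    def centroid(cluster):
        return (sum(p[0] for p in cluster) / len(cluster),
                sum(p[1] for p in cluster) / len(cluster))

    def fuse(candidates, radius_m=3.0):
        # Greedily assign each candidate to the first cluster whose centroid
        # lies within radius_m; one fused position is returned per cluster.
        clusters = []
        for p in candidates:
            for c in clusters:
                if haversine_m(p, centroid(c)) <= radius_m:
                    c.append(p)
                    break
            else:
                clusters.append([p])
        return [centroid(c) for c in clusters]

    # Three noisy sightings of one pole plus one distinct object (made-up):
    cands = [(53.34381, -6.25461), (53.34382, -6.25459),
             (53.34383, -6.25460), (53.34420, -6.25390)]
    print(fuse(cands))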


Related publications:

  • V. A. Krylov, E. Kenny, R. Dahyot. "Automatic Discovery and Geotagging of Objects from Street View Imagery". Remote Sensing, vol. 10, no. 5, May 2018.

  • V. A. Krylov, R. Dahyot. "Object Geolocation using MRF-based Multi-sensor Fusion", IEEE International Conference on Image Processing ICIP 2018, Proc. of IEEE ICIP 2018, Athens (Greece), October 7-10, 2018. [link] [pdf] [poster] [abstract] [bibtex]

  • V. A. Krylov, R. Dahyot. "Object Geolocation from Crowdsourced Street Level Imagery", International Workshop on Urban Reasoning, European Conference on Machine Learning ECML 2018. Proc. of ECML 2018, Dublin (Ireland), September 14, 2018. [link] [pdf] [presentation] [abstract] [bibtex]