
Dense semantic labeling

This page presents data and code to reproduce the results in:

M. Volpi and D. Tuia, "Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 881-893, 2017. doi: 10.1109/TGRS.2016.2616585

Abstract

Semantic labeling (or pixel-level land-cover classification) in ultra-high-resolution imagery (< 10 cm) requires statistical models able to learn high-level concepts from spatial data with large appearance variations. Convolutional Neural Networks (CNNs) achieve this goal by discriminatively learning a hierarchy of representations of increasing abstraction. In this paper we present a CNN-based system relying on a downsample-then-upsample architecture. Specifically, it first learns a rough spatial map of high-level representations by means of convolutions and then learns to upsample them back to the original resolution by deconvolutions. By doing so, the CNN learns to densely label every pixel at the original resolution of the image. This results in many advantages, including i) state-of-the-art numerical accuracy, ii) improved geometric accuracy of predictions and iii) high efficiency at inference time. We test the proposed system on the Vaihingen and Potsdam sub-decimeter resolution datasets, involving semantic labeling of aerial images of 9 cm and 5 cm resolution, respectively. These datasets are composed of many large and fully annotated tiles, allowing an unbiased evaluation of models making use of spatial information. We do so by comparing two standard CNN architectures to the proposed one: standard patch classification, prediction of local label patches employing only convolutions, and full patch labeling employing deconvolutions. All the systems compare favorably to or outperform a state-of-the-art baseline relying on superpixels and powerful appearance descriptors. The proposed full patch labeling CNN outperforms these models by a large margin, also showing a very appealing inference time.
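
To give a rough idea of the downsample-then-upsample scheme described above, the following MATLAB sketch chains MatConvNet building blocks into a tiny forward pass: convolutions and max-pooling shrink the spatial resolution, and learned deconvolutions (vl_nnconvt) bring the score maps back to the input size for dense labeling. All layer sizes, filter shapes and the 6-class output are illustrative assumptions; this is not the network shipped in the code below.

% Illustrative downsample-then-upsample forward pass with MatConvNet blocks.
% Requires MatConvNet compiled and on the path (run vl_setupnn first).
% All shapes below are assumptions for the sketch, not the paper's network.
x  = single(rand(64, 64, 3));                          % input patch, H x W x bands

% Downsampling path: convolution + ReLU + max-pooling
w1 = 0.01*randn(3, 3, 3, 16, 'single');  b1 = zeros(1, 16, 'single');
f1 = vl_nnrelu(vl_nnconv(x, w1, b1, 'pad', 1));        % 64 x 64 x 16
p1 = vl_nnpool(f1, [2 2], 'stride', 2);                % 32 x 32 x 16

w2 = 0.01*randn(3, 3, 16, 32, 'single'); b2 = zeros(1, 32, 'single');
f2 = vl_nnrelu(vl_nnconv(p1, w2, b2, 'pad', 1));       % 32 x 32 x 32
p2 = vl_nnpool(f2, [2 2], 'stride', 2);                % 16 x 16 x 32

% Upsampling path: learned deconvolutions back to the input resolution
u1 = 0.01*randn(4, 4, 16, 32, 'single');  c1 = zeros(1, 16, 'single');
d1 = vl_nnrelu(vl_nnconvt(p2, u1, c1, 'upsample', 2, 'crop', 1));   % 32 x 32 x 16

nClasses = 6;                                          % e.g. the ISPRS label set
u2 = 0.01*randn(4, 4, nClasses, 16, 'single'); c2 = zeros(1, nClasses, 'single');
scores = vl_nnconvt(d1, u2, c2, 'upsample', 2, 'crop', 1);          % 64 x 64 x 6

[~, labels] = max(scores, [], 3);                      % dense per-pixel labels

The key point is that the learned upsampling lets the network output a full-resolution score map in a single forward pass, instead of a single label per input patch; in the actual system the filters are of course learned from the training tiles rather than random.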




Code

Results of the provided networks might differ slightly from those in the paper, mainly due to different cropping and patching of the tiles at test time (see the sketch below for the general idea). The network for the Potsdam dataset has been slightly improved after publication.
THESE FILES ARE INTENDED TO SHOW AN EXAMPLE USE! However, the networks in the paper can be retrained from scratch by redefining the network and its parameters and using the training function provided (results will differ from those provided).
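
Because the test tiles are much larger than the patches the network ingests, dense inference is typically done by patching the tile, predicting each patch and keeping only its central part, where predictions are least affected by border effects. The sketch below only illustrates this cropping-and-patching idea; the patch size, the margin and the predictPatch stand-in are assumptions, not the routine shipped in the zip, and border remainders are ignored for brevity.

% Illustrative tile-based inference by patching and central cropping.
% predictPatch is a hypothetical stand-in for the trained network's forward
% pass; here it is a dummy returning a constant label map of the same size.
predictPatch = @(p) ones(size(p, 1), size(p, 2), 'uint8');

tile   = single(rand(1500, 1500, 3));   % a large test tile (toy size)
pSize  = 256;                           % patch size fed to the network
margin = 32;                            % border discarded from each prediction
step   = pSize - 2*margin;              % stride so the kept crops tile the image

[H, W, ~] = size(tile);
labels = zeros(H, W, 'uint8');

for r = 1:step:H - pSize + 1
  for c = 1:step:W - pSize + 1
    patch = tile(r:r+pSize-1, c:c+pSize-1, :);
    pred  = predictPatch(patch);        % pSize x pSize predicted label map
    % keep only the central crop, least affected by border artefacts
    labels(r+margin:r+pSize-margin-1, c+margin:c+pSize-margin-1) = ...
        pred(margin+1:pSize-margin, margin+1:pSize-margin);
  end
end
% (The last incomplete rows/columns are skipped here; a real routine pads or
%  mirrors the tile so that every pixel receives a label.)

Changing the patch size or the margin changes which pixels end up in the central crops, which is one reason why results obtained with the provided networks may differ slightly from the numbers reported in the paper.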

Enjoy! 
 
PLEASE CREDIT THE ABOVE-REFERENCED WORKS AS WELL.

DOWNLOAD DENSE SEMANTIC LABELING CODE v0 [zip]
Please cite the paper if you find this code useful for your research!



License

The material on this page is released under a Creative Commons Attribution-NonCommercial (CC BY-NC) license. You are free to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material
The licensor cannot revoke these freedoms as long as you follow the license terms.

Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
NonCommercial — You may not use the material for commercial purposes.
No additional restrictions — You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.

Note that external packages (e.g., MatConvNet) come with their own licenses.
