Weighted Intersection over Union (wIoU) for Evaluating Image Segmentation
Yeong-Jun Cho
Chonnam National University
Pattern Recognition Letters 2024
ABSTRACT
In recent years, many semantic segmentation methods have been proposed to predict the label of each pixel in a scene. In general, methods are compared by measuring either area prediction errors or boundary prediction errors; however, there is no intuitive evaluation metric that covers both aspects. In this work, we propose a new evaluation measure called weighted Intersection over Union (wIoU) for semantic segmentation. First, it builds a weight map generated from a boundary distance map, allowing a weighted evaluation of each pixel based on a boundary importance factor. The proposed wIoU can evaluate both contour and region quality by adjusting this factor. We validated the effectiveness of wIoU on a dataset of 33 scenes and demonstrated its flexibility. Using the proposed metric, we expect that more flexible and intuitive evaluation will be possible in the semantic segmentation field.
OVERVIEW
We observed that it is more difficult to infer labels at the boundaries of objects, because the inference probabilities of the candidate labels are mixed near the boundary. Unfortunately, the details of an object usually lie at its boundary. Even if most of the object region is predicted correctly, a result that fails to predict the object's boundary looks qualitatively bad. In addition, missing object details can lead to poor results in downstream applications of semantic segmentation.
To evaluate segmentation results while considering these issues, we first generate a weight map based on a distance transform. The weight map emphasizes the importance of the object's boundaries. We then calculate the weighted Intersection over Union (wIoU) using the generated weight map.
We generate a distance map D(p) whose value at each point p is the smallest Euclidean distance from p to the background region: D(p) = min_{q in B} ||p - q||, where B denotes the set of background pixels.
Figure 1. Examples of distance maps
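The distance map described above can be sketched as follows. This is a brute-force illustration; the function name and the tiny example mask are ours, and a real implementation would use a fast distance transform (e.g. scipy.ndimage.distance_transform_edt).

```python
import numpy as np

def distance_map(mask):
    """Per-pixel smallest Euclidean distance to the background region.

    mask: 2-D boolean array, True for object (foreground) pixels.
    Brute-force sketch for clarity, not efficiency.
    """
    fg = np.argwhere(mask)   # (N, 2) foreground coordinates
    bg = np.argwhere(~mask)  # (M, 2) background coordinates
    D = np.zeros(mask.shape)
    if len(fg) == 0 or len(bg) == 0:
        return D
    for y, x in fg:
        # Smallest Euclidean distance from (y, x) to any background pixel.
        D[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
    return D

# Tiny example: a 5x5 image with a 3x3 object in the centre.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
D = distance_map(mask)
# Boundary pixels of the object are 1 pixel from the background,
# the centre pixel is 2 pixels away: D[1, 1] == 1.0, D[2, 2] == 2.0
```

As the example shows, distances grow from the object boundary towards its interior, which is the quantity the weight map is built on.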
The distance map of each object has a different distance range due to the differences in object sizes. To handle this, we normalize the distances of each object c, yielding Dc(p). Finally, we design a weight map W(p) as an exponential decay function of Dc(p), where alpha > 0 is a boundary importance factor.
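A minimal sketch of this step is given below. The exact normalization and decay form are our assumptions, reconstructed from the text: we rescale the object's distances to [0, 1] and apply exponential decay exp(-alpha * Dc(p)), so weights are near 1 at the boundary and fall towards the interior. Background pixels are left at weight 1 here for simplicity; the paper defines the map per object region.

```python
import numpy as np

def weight_map(D, mask, alpha=1.0):
    """Boundary-emphasising weight map (sketch; form is an assumption).

    D: distance map (0 at background, growing towards object interiors).
    mask: boolean object mask, used to normalise distances per object.
    alpha: boundary importance factor (> 0).
    """
    W = np.ones(D.shape)
    d_max = D[mask].max() if mask.any() else 0.0
    if d_max > 0:
        # Normalise to [0, 1], then decay exponentially with alpha.
        W[mask] = np.exp(-alpha * D[mask] / d_max)
    return W

# Example distance map for a 3x3 object inside a 5x5 image.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
D = np.zeros((5, 5))
D[1:4, 1:4] = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
W = weight_map(D, mask, alpha=2.0)
```

Consistent with Figure 2, a small alpha makes exp(-alpha * Dc) nearly constant (a uniform map), while a large alpha concentrates high weights around the boundary and suppresses interior pixels.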
Based on the calculated weight map, we propose a new evaluation metric called Weighted Intersection over Union (wIoU).
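The metric itself can be sketched as a weighted form of the usual IoU ratio: each pixel's contribution to the intersection and the union is scaled by its weight. This plausible instantiation is our reconstruction, not the paper's exact formula; one useful sanity check is that a uniform weight map reduces it to the standard IoU.

```python
import numpy as np

def wiou(gt, pred, W):
    """Weighted IoU between boolean masks gt and pred (sketch).

    Each pixel's contribution to intersection and union is scaled
    by its weight W(p). With uniform W this equals the plain IoU.
    """
    inter = np.logical_and(gt, pred)
    union = np.logical_or(gt, pred)
    return (W * inter).sum() / (W * union).sum()

# Example: predicted segment shifted one pixel to the right.
gt = np.zeros((5, 6), dtype=bool)
gt[1:4, 1:4] = True
pred = np.zeros((5, 6), dtype=bool)
pred[1:4, 2:5] = True
W = np.ones((5, 6))
# Uniform weights: intersection 6 px, union 12 px, so wIoU = 0.5.
score = wiou(gt, pred, W)
```

With a boundary-emphasising W, the same shifted prediction is penalized more heavily, since its errors concentrate at the object boundary.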
RESULTS
We validated the effectiveness of wIoU on a dataset of 33 scenes under various boundary importance factors, demonstrating its flexibility.
Figure 2. Ground truth and its weight maps according to the boundary importance factor alpha. The range of the weight map is [0,1]; white and black pixels denote weight values 1 and 0, respectively. A small boundary importance factor leads to a uniform weight map, whereas a large boundary importance factor leads to high weights around the boundaries of objects and regions.
Figure 3. Comparison of evaluation results based on different metrics in real scenes. The first and second rows show ground-truth segments and error maps of predicted segmentation results. The third row shows the weight maps of wIoU (alpha=1). The last row reports the evaluation results of each metric.
🖇️ Citation
@article{cho2024weighted,
  title={Weighted Intersection over Union (wIoU) for evaluating image segmentation},
  author={Cho, Yeong-Jun},
  journal={Pattern Recognition Letters},
  year={2024},
  publisher={Elsevier}
}