Circle detection

Fig. 4.2.1

After completing color detection, the next simplest task that came to our mind was shape detection. We set out to develop an algorithm that could detect any simple geometric shape of our choice (such as a rectangle) in any input image. Unfortunately, our initial research suggested that developing an algorithm for detecting an arbitrary shape of our choice would be impractically difficult. Instead, we decided to limit the scope of this task to the case of circles.

The successful completion of this task could potentially facilitate autonomous landing. In the case of our minidrone, we can have the drone fly around and land as soon as it sees a circle below. In the case of drone delivery, on the other hand, the customer could place some sort of circular sign on the ground so that when the drone hovers over the sign and detects it, it lands exactly there and delivers the product to the customer.

Our objective in this task is to develop an algorithm that reads an image as input and outputs

  1. the number of identified circles in the image and

  2. an image showing the exact locations of the identified circles (if the image contains at least one circle).

We started out by working with the simplest data possible - images that contain exactly one solid black circle on a touchscreen, an example of which is shown on the right. Our goal is to infer that the image contains exactly one circular object and to determine all the pixels of the image that constitute that object.

The performance of our algorithm (whose source code can be found in this GitHub repository) on more diverse image data (containing more than one circle) is demonstrated at the end.

Fig. 4.2.2

Circle detection algorithm

We first experimented with morphological filters such as imclose() and imopen() from MATLAB's Image Processing Toolbox to process the input image so that the resulting image would contain only the object on the touchscreen. We would then use MATLAB's regionprops() function to extract the "circularity" property of that object. If the value were close enough to 1, the algorithm would output the object on the touchscreen along with an integer value of 1 indicating the presence of exactly one circular object in the image. While we made considerable progress with this approach, we found it to be limited (for example, tuning the parameters of the morphological filters depends heavily on how complex the input image is). (Our effort in studying these functions was not in vain, however, because they proved to be crucial in our algorithm for chessboard coordinates determination.) Fortunately, while trying MATLAB's Image Segmenter App out of curiosity, we came across the imfindcircles() function.
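To give a concrete sense of this first approach, below is a minimal sketch of the morphology-plus-regionprops() pipeline; the file name, structuring-element size, and circularity threshold are illustrative assumptions rather than the exact values we used.

```matlab
% Sketch of the initial morphology-based approach (illustrative parameters).
rgbImage = imread('sample_circle.png');    % hypothetical input file
bw = imbinarize(rgb2gray(rgbImage));       % threshold to a binary image
bw = imcomplement(bw);                     % make the dark circle the foreground
bw = imclose(bw, strel('disk', 5));        % close small gaps in the object
bw = imopen(bw, strel('disk', 5));         % remove small specks of noise
stats = regionprops(bw, 'Circularity');    % circularity of each remaining object
% Declare a circular object if the circularity is close enough to 1.
isCircle = ~isempty(stats) && abs(stats(1).Circularity - 1) < 0.1;
```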

The imfindcircles() function uses the theory of the circular Hough transform to achieve consistently high performance even in the presence of noise, occlusion, and varying illumination. Its inputs are an image, a radius range, an "object polarity" parameter, and a "sensitivity" parameter, and its outputs are the centers and radii of the detected circles. The input image could in principle be in RGB format, but we found that for an image containing more than one circle, this is not the ideal format for circle detection. According to the theory of the circular Hough transform (see the section on mathematical background), an important conceptual step in finding the circles in an image is locating the edge pixels. When the image is in RGB format, these edge pixels may not be easily located because of the additional color information in each pixel. We therefore added a preliminary step of binarizing the input image to "help" the imfindcircles() function locate the edge pixels.
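As a rough illustration of this preliminary step (our actual preprocessing may differ slightly), the binarization can be done with rgb2gray() and imbinarize(); the file name below is a placeholder.

```matlab
% Sketch of the binarization step performed before circle detection.
rgbImage = imread('sample_circle.png');   % hypothetical input file
grayImage = rgb2gray(rgbImage);           % collapse the color channels to intensity
bwImage = imbinarize(grayImage);          % threshold the grayscale image to binary
```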

Fig. 4.2.3

Fig. 4.2.4

Fig. 4.2.5

We use the binarized image above as the input image for the imfindcircles() function. For the radius range, we used imtool()'s ruler and found that most of the circles in our image data had a radius of about 20 pixels, so a radius range of 10 to 50 pixels is more than sufficient. Since the circle in our image data is black on a white background (the rest of the touchscreen pixels), the correct object polarity setting would be "dark". However, for a more complicated situation such as an image containing one circle inside another (see sample image 7 in the "performance demonstration" section for a concrete example), this setting would lead to "underdetection" issues (for example, detecting only the outer circle in an image that contains one circle inside another). Therefore, in order for our algorithm to work on more diverse image data, we run both configurations and combine their outputs (see the code below).

For sensitivity, lower values make the algorithm less sensitive to circular objects (making it harder to detect circles), while higher values make it more sensitive (risking classifying noncircular objects as circles). The default sensitivity value is 0.85, but we found that this would sometimes lead to "overdetection" issues - detecting circles in an image that in fact contains none. Through experimentation, we chose a slightly lower value of 0.8.
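The sketch below shows how the two object polarity configurations and the sensitivity value of 0.8 fit together. It assumes the binarized image bwImage from the preprocessing step; the variable names are ours, and our actual code may organize these calls differently.

```matlab
% Detect circles under both polarities and combine the results (simplified sketch).
radiusRange = [10 50];    % radii in pixels, based on imtool() measurements
[centersDark, radiiDark] = imfindcircles(bwImage, radiusRange, ...
    'ObjectPolarity', 'dark', 'Sensitivity', 0.8);
[centersBright, radiiBright] = imfindcircles(bwImage, radiusRange, ...
    'ObjectPolarity', 'bright', 'Sensitivity', 0.8);
centers = [centersDark; centersBright];   % stack detections from both runs
radii = [radiiDark; radiiBright];
```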

With the main method of finding circles sorted out, the rest of our algorithm is relatively straightforward. The length of the radii vector returned by the imfindcircles() function tells us how many circles were detected in the input image, and we use the meshgrid() function and some basic geometry (the Pythagorean theorem, specifically) to output the pixels that constitute each detected circle.
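A minimal sketch of this final step is given below, assuming the combined centers and radii from the previous step; again, the variable names are ours and our actual implementation may organize this differently.

```matlab
% Count the detected circles and build a mask of the pixels they cover.
numCircles = numel(radii);    % one radius per detected circle
[cols, rows] = meshgrid(1:size(bwImage, 2), 1:size(bwImage, 1));
circleMask = false(size(bwImage));
for k = 1:numCircles
    % A pixel lies inside circle k if its squared distance to the center
    % (computed via the Pythagorean theorem) is at most the squared radius.
    dist2 = (cols - centers(k, 1)).^2 + (rows - centers(k, 2)).^2;
    circleMask = circleMask | (dist2 <= radii(k)^2);
end
```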

Fig. 4.2.6

Fig. 4.2.7

Performance demonstration

Below, we demonstrate the performance of our algorithm on various image data. We note that we have tested our algorithm on hundreds of images, but for the purpose of illustration, we will only show a select few that cover a broad range of cases.

Fig. 4.2.8

Fig. 4.2.9

Sample image 1 is the same image that we used above in the discussion of our circle detection algorithm.

In the case of sample image 2, it is clear that the glare in the image is incorrectly identified as a circle by our algorithm.

Fig. 4.2.10

Fig. 4.2.11

For sample images 3 and 4, the "human answer" to how many circles there are in each image is 2: since the background color is white, we would not count the white interiors of the circles as additional circles. Our algorithm, however, does not take into account whether the color of a circle matches the background color, so we might expect it to detect a total of 4 circles. But because the radius of the smaller white circle is less than 10 pixels and our algorithm is designed to detect only circles with radii between 10 and 50 pixels, the "algorithmic answer" (given by our algorithm) to how many circles there are in each image would be 3. We note that in sample image 3, there is not enough color contrast between the top circle and the white background, which causes our algorithm to fail to detect the top circle.

Fig. 4.2.12

Fig. 4.2.13

For sample images 5 and 6, both circles in each image are filled (unlike sample images 3 and 4), so we would expect our algorithm to detect exactly 2 circles. Note that in sample image 6, even though the circle on the left is slightly cut off, our algorithm is still able to detect it. This is thanks to the sensitivity value (0.8) in our algorithm, which controls how sensitive it is to circles.

Fig. 4.2.14

For sample image 7, we have one circle inside another, and we would expect our algorithm to detect 2 circles. Our algorithm is able to achieve this because of our approach of "combining dark and bright circles." If we set ObjectPolarity simply to 'dark', we would only be able to detect the outer circle, whereas if we set it simply to 'bright', we would only be able to detect the inner circle. This is because the yellow color of the inner circle is considered brighter than the blue color of the outer circle, which in turn is considered darker than the white color of the background. Our approach of "combining dark and bright circles" detects the two circles separately and then combines the results at the end.

Fig. 4.2.15

Sample image 8 shows the case of two overlapping circles. Our algorithm is still able to correctly detect both circles.

Fig. 4.2.16

Finally, sample image 9 is the most complicated case, with three overlapping circles (similar to a Venn diagram). Our algorithm is nevertheless capable of detecting all the circles in the image.

In summary, our algorithm is largely successful in the sense that it can generally detect all the circles that we want it to detect (as in sample images 1, 4, 5, 6, 7, 8, and 9). The most common cause of a failure to detect a circle is a lack of color contrast (as in sample image 3). However, our algorithm does frequently detect circles falsely. The most common cause of this is glare in an image: glare can be falsely identified as a circle (as in sample image 2), so our algorithm sometimes declares the presence of circles even when the image contains none.