Chan-Vese Active Contours

A different interpretation of the Chan-Vese Active Contour, from the viewpoint of optimal global thresholding

posted Sep 24, 2010, 11:15 AM by Yue Wu

The original energy function of CV is defined as

                    E_cv(C) = F1(C) + F2(C) = Integrate<interior of C> |U(x,y)-u1|^2  +  Integrate<exterior of C> |U(x,y)-u2|^2

                    where U(x,y) represents the pixel located at (x,y) in the 2D image U, and u1 and u2 are the averages of the interior region and the exterior region, respectively.

Chan and Vese then seek the contour C that minimizes E_cv.
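Before moving on, a minimal MATLAB sketch may help fix ideas (this is not the authors' code; the binary-mask representation and all names are my own): given a mask marking the interior of a candidate contour C, the energy reads

    % Sketch: evaluate E_cv for a candidate contour given as a binary
    % mask inC (true = interior of C). Not the authors' code.
    function E = cv_energy(U, inC)
        U  = double(U);
        u1 = mean(U(inC));                 % average of the interior region
        u2 = mean(U(~inC));                % average of the exterior region
        E  = sum((U(inC) - u1).^2) + sum((U(~inC) - u2).^2);   % F1 + F2
    end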

The above energy function can be understood through Otsu's optimal global thresholding scheme <see http://en.wikipedia.org/wiki/Otsu's_method>.

Otsu's method is applied directly to the histogram of an image. It searches for the threshold T such that:
                   1. the histogram is separated into two groups, C_a (above T) and C_b (below T);
                   2. E_otsu(T) = W_a*Var_a + W_b*Var_b is minimized.
                       Here W_a = sum[ Pr(i>=T) ], W_b = sum[ Pr(i<T) ], and Var_x is the variance of group C_x. (A brute-force sketch of this search follows below.)
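Since the objective is so simple, the search fits in a few lines. A minimal MATLAB sketch, run on the pixels directly rather than on a normalized histogram (the test image and variable names are just for illustration):

    % Sketch: brute-force search for the T minimizing W_a*Var_a + W_b*Var_b.
    U = double(imread('coins.png'));       % any grayscale image will do
    bestE = inf;  bestT = NaN;
    for T = min(U(:))+1 : max(U(:))
        a  = U(U >= T);   b  = U(U < T);   % groups C_a and C_b
        Wa = numel(a)/numel(U);            % weight of C_a
        Wb = numel(b)/numel(U);            % weight of C_b
        E  = Wa*var(a,1) + Wb*var(b,1);    % population (biased) variances
        if E < bestE, bestE = E; bestT = T; end
    end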
                       
Although Otsu's method is derived for the image histogram, it is easily adapted to the image itself.
Then we can write

                 E_otsu(T) = sum[ (U(x,y) - u)^2 ] , where u = u_b, the mean of C_b, when U(x,y) < T,
                                                           u = u_a, the mean of C_a, when U(x,y) >= T.

The reader may have noticed that both W_x and Var_x from the original E_otsu formula have disappeared. This is because:
                   1. W_x is the weight of group x: W_x = N[C_x]/N[U], i.e. the ratio of the number of pixels in group x to the number of pixels in image U;
                   2. Var_x = SUM<U(p,q) is in C_x>{ (U(p,q) - u_x)^2 } / N[C_x].
Therefore,

                  E_otsu(T) = SUM<x = a, b> { N[C_x]/N[U] * SUM<U(p,q) is in C_x>{ (U(p,q) - u_x)^2 } / N[C_x] }
                            = 1/N[U] * SUM<x = a, b> { SUM<U(p,q) is in C_x>{ (U(p,q) - u_x)^2 } }
                            ∝ SUM<(p,q) in group C_a> |U(p,q) - u_a|^2  +  SUM<(p,q) in group C_b> |U(p,q) - u_b|^2
                                    <NOTE: the constant factor 1/N[U] is dropped, since it does not affect the optimization problem.>
                                 
 Now you see how close the forms of E_cv(C) and E_otsu(T) are.
 Finally, if one thinks about the difference between the gray threshold T and the contour C, one finds that they are actually doing the same thing:
                 classifying the given image U into two parts,
                 except that the groups defined by C are called the interior region and the exterior region,
                           while the groups defined by the single threshold T are called the region above the threshold and the region below the threshold.

In other words, for a given threshold T, there is always a contour C' such that T and C' induce identical partitions of the image U.

If you agree with this, then E_cv(C) is an extension of E_otsu(T), in the sense that it uses a more general class of partitions (arbitrary contours rather than gray-level thresholds) and solves a local minimization problem rather than the global one.
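To make the connection concrete, here is a hedged MATLAB sketch (names and test image are mine): each threshold T induces a partition, i.e. a contour C'(T) around {U >= T}, and minimizing the CV fitting energy over these C'(T) recovers Otsu's threshold, since the two objectives differ only by the constant 1/N[U].

    U  = double(imread('coins.png'));      % illustrative input image
    Ts = (min(U(:))+1) : max(U(:));
    E  = zeros(size(Ts));
    for k = 1:numel(Ts)
        inC  = U >= Ts(k);                 % interior of the contour C'(T)
        E(k) = sum((U(inC)  - mean(U(inC))).^2) ...
             + sum((U(~inC) - mean(U(~inC))).^2);
    end
    [ignored, kmin] = min(E);
    T_star = Ts(kmin);                     % coincides with Otsu's threshold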
 
                  

MATLAB CODES FOR ACTIVE CONTOURS

posted Mar 25, 2009, 4:41 PM by Yue Wu   [ updated Mar 25, 2009, 7:54 PM ]

I implemented Chan-Vese active contours, i.e. active contours without edges, active contours without edges for vector-valued images, and multiphase active contours.
 
Here are some demos:
The zip file "Chan-Vese Active Contours" contains all my MATLAB code.
 
For more details about Chan-Vese active contours, please see the Introduction page.
 
For more details about my MATLAB functions, please see HELP.pdf
 
For more details about installation, please see README.txt
 
Here are my presentation slides and my report for this task.
 
 
  • Chan-Vese Active Contours.zip   964k - Mar 25, 2009, 7:46 PM by Yue Wu (v1)
  • PPT.zip   3726k - Mar 25, 2009, 5:10 PM by Yue Wu (v1)
    PPT for Chan-Vese Active Contours
  • Yue Wu's Report.pdf   517k - Mar 25, 2009, 5:08 PM by Yue Wu (v1)
    Report
  • READ ME.txt   4k - Mar 25, 2009, 5:08 PM by Yue Wu (v1)
    Read me file
  • HELP.pdf   791k - Mar 25, 2009, 5:08 PM by Yue Wu (v1)
    Help file for Chan-Vese Active Contours code
 
 
All Rights Reserved
 
Note: all code may be used for research or study purposes only.

A SIMPLE INTRODUCTION OF ACTIVE CONTOUR WITHOUT EDGES

posted Mar 24, 2009, 11:38 PM by Yue Wu   [ updated Sep 15, 2009, 7:11 PM ]

ACTIVE CONTOUR WITHOUT EDGES

 

Last updated 03/23/2009

By Yue Wu

All Rights Reserved
 
  1. Problem Introduction

  2. Mathematical Description

  3. Numerical Algorithm

  4. Evaluations on real images

  5. Differences between methods

  6. Reference

1. Problem Introduction

Simply speaking, the problem is: given an image I, what is its segmentation and how do we find it?

For example, if the following brain image is given, the problem could be phrased as "what is the image content inside the skull?", i.e. "how do we extract the part of the brain image bounded by the 'white' skull contour?".

 

fig.1 Original MRI brain image

There are many approaches to obtain a segmentation of the skull shape. Even simple ones, like a Sobel mask, will give you a good detection of the skull contour. For a better edge map, you may use the Canny edge detector, and from it obtain a segmentation like the one below.


fig.2 Skull segmentation of the brain image
 
Of course, the above description of the segmentation of this brain image is not that precise, because we want to know the details of the brain, not everything inside the skull. However, segmenting the brain itself is not an easy problem. Although you could use a 'high edge value' and a 'low edge value' to distinguish the contour of the skull from that of the brain, the method used here is called active contours, more precisely Chan-Vese active contours without edges.

fig. 3 Brain segmentation of the brain image
 
Clearly, the latter segmentation conveys much more useful information and gives more detail.
 
In this article, we mainly discuss the idea, the algorithm, and the realization of the Chan-Vese active contour.

2. Mathematical Description

The Chan-Vese active contour algorithm grows out of the segmentation problem formulated by Mumford and Shah. For lack of formula input tools on this site, I use image files for the formulas and Greek symbols.

 

Mumford and Shah’s minimization problem:
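For reference, the functional they minimize, in the notation used in [1], is

    F^{MS}(u, C) = \mu \, \mathrm{Length}(C)
                 + \lambda \int_{\Omega} |u_0(x,y) - u(x,y)|^2 \, dx\, dy
                 + \int_{\Omega \setminus C} |\nabla u(x,y)|^2 \, dx\, dy

where u_0 is the given image, u is a piecewise-smooth approximation of u_0, and C is the edge set.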

Chan-Vese active contours without edges
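Again for reference (the formula image is not reproduced here), the Chan-Vese energy from [1] is

    F(c_1, c_2, C) = \mu \, \mathrm{Length}(C) + \nu \, \mathrm{Area}(inside(C))
                   + \lambda_1 \int_{inside(C)} |u_0(x,y) - c_1|^2 \, dx\, dy
                   + \lambda_2 \int_{outside(C)} |u_0(x,y) - c_2|^2 \, dx\, dy

where the last two integrals are the fitting terms F1 and F2 discussed below; [1] takes \lambda_1 = \lambda_2 = 1 and \nu = 0 in most experiments.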

In order to understand what's going on with this idea, let's look at some figures from the original paper.

In the paper Active contours without edges [1], the two fitting terms F1 and F2 are interpreted as two forces: the first is the force that shrinks the contour, the second the force that expands it. These two forces balance when the contour reaches the boundary of the object of interest.

For example, let's examine the following four cases. Simply define everything in black = -1 and everything in gray = +1. Here c1 and c2 can be interpreted as the mean value of everything inside the contour C and the mean value of everything outside the contour C, respectively, and u0 stands for the entire image.

Case one:
the initial contour covers the whole object (-1) and some gray region (+1). Thus c1 is somewhere around zero, while c2 is exactly 1. Notice that each integral above is taken only over the region inside (or outside) the contour. Clearly, if we subtract c2 from everything outside the contour, we get zero; hence the second term F2 is zero. Because c1 is around zero, when we subtract c1 from everything inside the contour and sum the squares as the formula shows, we get a large positive number, so F1 > 0.
Now F1 > 0 but F2 = 0. Therefore, the contour will shrink itself in the next step.

Now you might be able to explain what's going on with the contours in the other cases.
 

Finally, when our contour C reaches the boundary of the object, as case four shows, F1 = 0 and F2 = 0. As a result, the contour C reaches its equilibrium: we have found the contour of the object, and thus its segmentation.

                                       

Level set formulation
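The formula images are again omitted; for reference, in the level set formulation of [1] the contour C is the zero level set of a function \phi (with inside(C) = {\phi > 0}), and the energy is rewritten using the Heaviside function H and the delta function \delta:

    F(c_1, c_2, \phi) = \mu \int_\Omega \delta(\phi) |\nabla \phi| \, dx\, dy
                      + \nu \int_\Omega H(\phi) \, dx\, dy
                      + \lambda_1 \int_\Omega |u_0 - c_1|^2 \, H(\phi) \, dx\, dy
                      + \lambda_2 \int_\Omega |u_0 - c_2|^2 \, (1 - H(\phi)) \, dx\, dy

Minimizing over \phi (with c_1 and c_2 updated as the region means) yields the gradient-descent evolution

    \frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi) \left[ \mu \, \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) - \nu - \lambda_1 (u_0 - c_1)^2 + \lambda_2 (u_0 - c_2)^2 \right]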


3. Numerical Algorithms

Analytic approximations for Heaviside and Dirac delta functions

Note: there are many different ways to obtain a 'continuous' version of the Heaviside function and its corresponding delta function.
For more information, please see Wikipedia.
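One common choice, the one used in [1], is

    H_\varepsilon(z) = \frac{1}{2}\left(1 + \frac{2}{\pi} \arctan\frac{z}{\varepsilon}\right), \qquad
    \delta_\varepsilon(z) = H_\varepsilon'(z) = \frac{\varepsilon}{\pi(\varepsilon^2 + z^2)}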

  

Analytic approximations for image curvature
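The curvature term is \mathrm{div}(\nabla\phi / |\nabla\phi|). As a hedged sketch (central differences via MATLAB's gradient; the function name and regularization are my own choices), it can be approximated by:

    % Sketch: curvature kappa = div( grad(phi) / |grad(phi)| ), computed
    % with central differences; a small constant avoids division by zero.
    function K = curvature(phi)
        [px, py]   = gradient(phi);                % dphi/dx, dphi/dy
        mag        = sqrt(px.^2 + py.^2) + eps;    % |grad(phi)|, regularized
        [nxx, nxy] = gradient(px ./ mag);          % derivatives of normal x
        [nyx, nyy] = gradient(py ./ mag);          % derivatives of normal y
        K = nxx + nyy;                             % divergence of unit normal
    end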

 

Approximation of Phi

Actually there are several ways to compute the evolution of Phi (the level set function).
One way is discussed in the paper; for more details, please check the reference.
Briefly speaking, by discretizing Phi, the authors arrive at the following equation. As you can see, this is an implicit equation in
Phi(n+1). The good news is that this equation is still linear and thus solvable. The bad news is that it needs more steps and more complicated calculations.
 

 
 
If you want to solve the equation this way, please check the paper; it has all the necessary steps.
All in all, the equation is solvable, so we can compute Phi(n+1) and the problem is solved. (Phi stops evolving once it reaches the boundaries of the object of interest.)
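As a simpler alternative to the paper's semi-implicit scheme, here is a hedged sketch of one explicit gradient-descent step (it reuses the curvature() sketch above; all parameter names are mine, and an explicit step needs a small dt to stay stable):

    % Sketch: one explicit Euler step of the Chan-Vese evolution for Phi.
    function phi = cv_step(phi, u0, mu, nu, lam1, lam2, epsl, dt)
        H   = 0.5*(1 + (2/pi)*atan(phi/epsl));       % regularized Heaviside
        dlt = epsl ./ (pi*(epsl^2 + phi.^2));        % regularized delta
        c1  = sum(u0(:).*H(:))     / max(sum(H(:)),   eps);   % inside mean
        c2  = sum(u0(:).*(1-H(:))) / max(sum(1-H(:)), eps);   % outside mean
        F   = mu*curvature(phi) - nu ...
              - lam1*(u0 - c1).^2 + lam2*(u0 - c2).^2;        % total force
        phi = phi + dt * dlt .* F;                   % gradient-descent step
    end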
 

4.  Evaluation on real images

Chan-Vese active contours without edges [1] on a grayscale image


If you are interested in how these results were obtained and in the MATLAB code behind them, please check the page MATLAB CODES FOR ACTIVE CONTOURS.
Here is a video clip on YouTube, uploaded by me.
In this clip, you will see how we find the outline of Mickey.


 

Chan-Vese active contours without edges for vector-valued images [2]

 

Active contours without edges for vector-valued images [2] is similar to active contours without edges [1].

The major difference is:

 

Here the index i runs over the image components; for example, an RGB image has the three color components R, G, and B. The c(i) are the mean values inside (and outside) the contour, computed separately for each color component.
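For reference (the formula image is omitted here), the fitting terms in [2] average the per-channel errors, roughly:

    F(\bar c^+, \bar c^-, C) = \int_{inside(C)} \frac{1}{N} \sum_{i=1}^{N} \lambda_i^+ \, |u_0^{(i)}(x,y) - c_i^+|^2 \, dx\, dy
                             + \int_{outside(C)} \frac{1}{N} \sum_{i=1}^{N} \lambda_i^- \, |u_0^{(i)}(x,y) - c_i^-|^2 \, dx\, dy

where N is the number of components (N = 3 for RGB) and \lambda_i^\pm are per-channel weights.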

In the earlier Chan-Vese active contours without edges, we applied the 'force balance' idea to a single 2D image. For Chan-Vese active contours without edges for vector-valued images, we apply the same idea to a 3D image, which can be considered as three 2D images. The algorithm as described works on a 2D image, so it cannot handle the 3D image directly.

You might think we could simply apply the algorithm to each component and obtain a nice contour. Unfortunately, if you do that, you get one contour per component (per 2D image), and these three contours will probably not be identical.

Therefore the right way to apply the 2D algorithm to this 3D problem is different. Recall that our objective is still to get one, and only one, neat contour around the objects of interest.

As you can see in the formula above, the forces can be interpreted exactly as in dynamics. We are still looking for two forces, one tending to shrink the contour and the other tending to expand it; but instead of one component, we now have three. We first compute the force contribution on each of the three 2D components, then combine them into resultant forces. Finally, the resultant forces control the movement of our contour.
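A hedged sketch of that combination step (u0 is an m-by-n-by-N array, c1 and c2 hold the N channel means inside/outside C, lam1 and lam2 are per-channel weights; all names are mine):

    % Sketch: per-channel forces combined into one resultant force.
    F = zeros(size(u0,1), size(u0,2));
    for i = 1:size(u0,3)
        F = F - lam1(i)*(u0(:,:,i) - c1(i)).^2 ...
              + lam2(i)*(u0(:,:,i) - c2(i)).^2;
    end
    F = F / size(u0,3);        % the resultant force that moves the contour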

This approach is very good for noisy images, where some or many pixels have been corrupted by noise. Generally speaking, many edge detection algorithms will mistakenly pick up that noise, because the noisy pixels are not distinguishable from informative pixels. In this algorithm, however, unless the noise contaminates all three components at the same pixel, the edge detection can resist its influence.

Here is an example: the image on the right has had two types of noise added to two of its components and has become really noisy.

We apply Chan-Vese active contours without edges for vector-valued images to this noisy image, and the following is our result.

The result is really nice.
 

 

Multiphase Chan-Vese active contours without edges [3]

Simply speaking, in the two approaches discussed above, we only try to find one 'active contour'. As we can see from the results, one active contour can only tell us two things: what is the foreground of the image and what is the background; finer details it fails to capture.
Therefore, for any image with more than two colors, the previous approaches cannot distinguish all of them: they will lump two colors into the foreground or two colors into the background.

How do we solve this problem? Naturally, we think of using more contours: with N contours (level set functions), we should be able to distinguish 2^N colors, as sketched below.
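For reference, in [3] each contour is the zero level set of a function \phi_i, and the phases are the sign combinations of the \phi_i. With two level set functions \phi_1 and \phi_2, the four phases are

    \{\phi_1 > 0, \phi_2 > 0\}, \quad \{\phi_1 > 0, \phi_2 < 0\}, \quad \{\phi_1 < 0, \phi_2 > 0\}, \quad \{\phi_1 < 0, \phi_2 < 0\}

each with its own mean value (c_{11}, c_{10}, c_{01}, c_{00}), and the fitting energy sums the squared errors over the four phases.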

I should be able to add more information for this topic later.

If you are interested in this part, please see its reference paper.

Here are two examples:

The following image contains several complicated structures: a triple junction, a quadruple junction, different colors, and different shapes. One-phase active contours cannot distinguish all objects in the image, while two-phase active contours achieve this goal.

The result above shows that multiphase active contours can distinguish many more objects and automatically detect every edge in the input image. The method easily handles complicated topological changes.


 

In this photo, the background is the sky and the foreground is two red and two yellow flowers.

By using multiphase active contours, we can get the following segmentation.

 

    Really cool, isn't it?


5. Differences between methods

Although we have said something about the differences between these methods, you might still wonder: what exactly are the differences? Which one gives the best quality, and which one runs fastest? This section tries to show you.

1. Gray image or Color image?

Treating the image as a gray image or as a color image leads to two different approaches.
If you treat the image as a gray one, you only need to care about the two image forces mentioned in the previous section.
However, if you treat the image as a color one, you have to apply the idea for vector-valued images, which computes all components of the forces.
Generally speaking, treating the image as gray simplifies the segmentation problem and leads to simpler calculation, but loses some image information.
Treating the image as color helps to denoise the image, but leads to more calculation.

Therefore, if you have a clean image, i.e. no noise, you can probably treat it as a gray image. Otherwise, I would treat it as a vector-valued image, trading calculation complexity for better denoising ability.

Recall the example about denoising in section 4. Let's see what happens when we treat the same image as a gray one.

 
It is not hard to convince yourself why we get the result above.
However, when we treat it as a vector-valued image, the strong denoising ability shows.

We get a very clean segmentation compared to the noisy input image, and this segmentation is an almost perfect match to the original clean image.

2. Single phase or multiphase?

Treating an image as single-phase means you can only 'divide' the original image into two parts (however you interpret them: two colors, background and foreground, etc.).
Treating an image as multiphase means you can 'divide' the original image into more parts, precisely up to 2^N parts, where N is the number of phases (level set functions).
So if the objects you are interested in are not that 'simple' (a single object, a homogeneous region, etc.), you should probably apply multiphase for better results. However, multiphase trades off time: if you do not have a fast machine, waiting for the result can be a really painful experience.
But if you only want to detect a homogeneous region, as in text segmentation, single-phase is a good choice for the task.

Here is the result if we apply a single-phase active contour to 'flowers.jpg'.

As mentioned before, single-phase can only 'divide' the image into two parts. In the result above, we see that the algorithm finally assigns every 'bright' thing to the background and every 'dark' thing to the foreground. Therefore some 'highlights' on the stems and the flowers have been assigned to the background. However, we can still recognize the 'flowers' in the result; at least the algorithm separated the sky from the flowers.
More precisely, there are at least five colors in the original image: blue, white, red, yellow, and green. With only one phase, the algorithm has to group these five regions into two. How does it do this? Basically, it groups regions according to their mean values: if two regions have mean values closer to each other than to the other colors, those two are grouped into one. Clearly, the mean values of white and sky blue are much closer to each other than to the others, so these two color regions merge into one in the final segmentation.

But with multiphase, we can get more information: the image is divided not just into 'black' and 'white' regions, but into more color regions.
 

 
Although you may argue that the result is still not perfect, for me it is already good enough. With two phases we can only divide the image into four regions (you could say four colors). We successfully distinguished the colors of the flowers, which were grouped into one region in the single-phase case. However, we still failed to distinguish 'white' from 'sky blue'. Why? Because we only have two phases and consequently can distinguish at most four regions: there is no way to detect a five-color image with a four-color-detection algorithm.

You may also ask: could we instead distinguish 'white' from 'sky blue' but not 'yellow' from 'red', or some other color pair? That is, if we ran the algorithm differently, could we get a segmentation which distinguishes 'white' and 'sky blue' but leaves two other colors together? The answer is YES.

The reason it is possible to get a segmentation distinguishing 'white' and 'sky blue' with two phases is that our algorithm is a variance-based algorithm.

Approximate mean values for each color on the three components:

                      R      G      B      R+G+B
Red:                212      3      3      (218)
Green:               30     28     20       (78)
Yellow:             197    172      1      (370)
White:              214    227    241      (682)
Sky Blue:           157    190    211      (585)

Clearly, both sky blue and white have all three component means above 120. The other three colors share the feature of a very low 'B' value.
More precisely, because we use equal weights for the RGB image, R+G+B can stand for the overall mean value. Without a doubt, if we need to divide the five numbers above into the two groups most different in mean, we should put White and Sky Blue in one group and leave the other three colors in the other. This is why, in the single-phase case, the algorithm grouped Red, Green, and Yellow together and White and Sky Blue together.

Furthermore, in the two-phase case, Red, Green, and Yellow differ more saliently in their mean values than White and Sky Blue do. Say Red, Green, and Yellow are in one group and White and Sky Blue in the other; let's see the cost of splitting each color out of its group. Note: the closer a color is to its group average, the higher the cost of splitting it out.

Average(Red + Green + Yellow) = 222

                   R+G+B      distance from the average
Red:                 218                  4
Green:                78                144
Yellow:              370                148

Average(White + Sky Blue) = 633.5

                   R+G+B      distance from the average
White:               682               48.5
Sky Blue:            585               48.5

As we can see, it costs less to split Green or Yellow out of the (Red+Green+Yellow) group than to split White from Sky Blue.


Note: the actual situation might not be exactly as described, because the active contour might also arrive at a different segmentation under different initial conditions.



6. References

 

 
1. Chan, T. F., & Vese, L. A. (2001). Active contours without edges. IEEE Transactions on Image Processing, 10(2), 266-277.

2. Chan, T. F., Sandberg, B. Y., & Vese, L. A. (2000). Active contours without edges for vector-valued images. Journal of Visual Communication and Image Representation, 11(2), 130-141.

3. Vese, L. A., & Chan, T. F. (2002). A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision, 50(3), 271-293.

4. Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4), 321-331.

5. Sethian, J. A. (1999). Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science (Cambridge ... on Applied and Computational Mathematics). Cambridge University Press, 2nd edition.

6. MATLAB Help, The MathWorks, Inc.

7. http://www.wikipedia.com

8. Lankton, S. Active Contour Matlab Code Demo. http://www.shawnlankton.com/?s=active+contour
