Chan-Vese Active Contours
A different interpretation of the Chan-Vese active contour from the viewpoint of optimal global thresholding
The original energy function of Chan-Vese (CV) is defined as

E_cv(C) = F1(C) + F2(C) = Integrate<interior of C> (U(x,y) - u1)^2 + Integrate<exterior of C> (U(x,y) - u2)^2

where U(x,y) is the pixel located at (x,y) in the 2D image U, and u1 and u2 are the averages of the interior region and the exterior region respectively. The authors then look for a contour C such that E_cv is minimized.

The above energy function can be understood through Otsu's optimal global thresholding scheme <see http://en.wikipedia.org/wiki/Otsu's_method>. Otsu's method is applied directly to the histogram of an image; it searches for the threshold T such that:

1. the histogram is separated into two groups, C_a (above T) and C_b (below T);
2. E_otsu(T) = W_a*Var_a + W_b*Var_b is minimized.

Here W_a = sum[ Pr(i>=T) ], W_b = sum[ Pr(i<T) ], and Var_x is the variance of the group C_x.

Although Otsu's method is derived for the image histogram, it can easily be adapted to the image itself. Then we can write

E_otsu(T) = sum[ (U(x,y) - u)^2 ], where u = u_b, the mean of C_b, when U(x,y) < T, and u = u_a, the mean of C_a, when U(x,y) >= T.

The reader may have noticed that both W_x and Var_x from the original E_otsu formula have disappeared. This is because:

1. W_x is the weight of group x, and W_x = N[C_x] / N[U], i.e. the ratio of the number of pixels in group x to the number of pixels in image U;
2. Var_x = SUM<U(p,q) in C_x>[ (U(p,q) - u_x)^2 ] / N[C_x].

Therefore,

E_otsu(T) = SUM<x = a, b> { N[C_x]/N[U] * SUM<U(p,q) in C_x>[ (U(p,q) - u_x)^2 ] / N[C_x] }
          = 1/N[U] * SUM<x = a, b> { SUM<U(p,q) in C_x>[ (U(p,q) - u_x)^2 ] }
          = SUM<in group C_a> (U(p,q) - u_a)^2 + SUM<in group C_b> (U(p,q) - u_b)^2

<NOTE: the factor 1/N[U] is dropped because a constant does not affect the optimization problem.>

Now you see how close the forms of E_cv(C) and E_otsu(T) are.
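As a quick numerical sanity check of the derivation above, the weighted within-class variance form W_a*Var_a + W_b*Var_b and the sum-of-squared-deviations form (divided by N[U]) agree exactly. A minimal Python sketch (illustrative only; the toy gray values are made up, and this is not part of my MATLAB package):

```python
import numpy as np

# Toy gray values standing in for an image (made up for illustration).
U = np.array([12, 14, 13, 200, 210, 205, 15, 198], dtype=float)
T = 100.0                      # an arbitrary threshold

above, below = U[U >= T], U[U < T]      # groups C_a and C_b
N = U.size

# Form 1: Otsu's weighted within-class variance, W_a*Var_a + W_b*Var_b.
W_a, W_b = above.size / N, below.size / N
form1 = W_a * above.var() + W_b * below.var()

# Form 2: sum of squared deviations from each group's own mean, over N[U].
form2 = (np.sum((above - above.mean()) ** 2)
         + np.sum((below - below.mean()) ** 2)) / N

print(np.isclose(form1, form2))  # True: the two forms are identical
```

The N[C_x] in the numerator of W_x cancels the N[C_x] in the denominator of Var_x, which is exactly why the check passes.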
Finally, if one starts to think about the difference between the gray threshold T and the contour C, one finds that they are actually doing the same thing: classifying the given image U into two parts. The only difference is that the groups defined by C are called the interior region and the exterior region, while the groups defined by the single threshold T are called the region above the threshold and the region below the threshold. In other words, for a given threshold T, there is always a contour C' such that T and C' induce identical partitions of the image U. If you agree with this fact, then E_cv(C) is an extension of E_otsu(T) in the sense of using a more general class of contours and of solving a local minimization problem rather than the global one.
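To see the threshold side of this correspondence in action, one can find the global E_otsu minimizer by brute force over all candidate thresholds. A Python sketch (the bimodal toy data is my own invention, not from the paper or my MATLAB code):

```python
import numpy as np

def within_class_ss(U, T):
    """Sum of squared deviations of each group from its own mean,
    i.e. E_otsu(T) up to the constant factor 1/N[U]."""
    above, below = U[U >= T], U[U < T]
    ss = 0.0
    for g in (above, below):
        if g.size:
            ss += np.sum((g - g.mean()) ** 2)
    return ss

# Hypothetical bimodal gray values: one dark mode and one bright mode.
U = np.array([10, 12, 11, 13, 9, 200, 205, 199, 202, 201], dtype=float)

# Global minimization: try every distinct gray value as the threshold.
best_T = min(np.unique(U), key=lambda T: within_class_ss(U, T))
print(best_T)  # the minimizer separates the two modes
```

The minimizing T is exactly the partition a threshold-shaped contour C' would produce; Chan-Vese searches a much richer family of partitions, but only locally.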
MATLAB CODES FOR ACTIVE CONTOURS
I implemented Chan-Vese active contours, i.e. active contours without edges, active contours without edges for vector-valued images, and multiphase active contours.
Here are some demos:
The zip file "ChanVese Active Contours" contains all my MATLAB codes.
For more details about Chan-Vese active contours, please see the Introduction page.
For more details about my MATLAB functions, please see HELP.pdf.
For more details about installation, please see README.txt.
Here are my slides and report for this task.
All Rights Reserved
Note: all codes can only be used for research or study purposes. 
A SIMPLE INTRODUCTION TO ACTIVE CONTOURS WITHOUT EDGES
ACTIVE CONTOUR WITHOUT EDGES
Last updated 03/23/2009
By Yue Wu. All Rights Reserved
Analytic approximations for image curvature
If you are interested in how to get these results and what their MATLAB codes are, please check the page MATLAB CODES FOR ACTIVE CONTOURS. Here is a video clip on YouTube that I uploaded. In this clip, you will see how we find the sketch of Mickey.
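The curvature term kappa = div( grad(phi) / |grad(phi)| ) is what regularizes the evolving contour, and it admits a simple finite-difference approximation. Here is an illustrative Python sketch of one common discretization (my released code is MATLAB; this is not the exact scheme used there):

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """Finite-difference curvature kappa = div( grad(phi) / |grad(phi)| )."""
    phi_y, phi_x = np.gradient(phi)            # gradients along rows, cols
    norm = np.sqrt(phi_x**2 + phi_y**2) + eps  # eps avoids division by zero
    ny, nx = phi_y / norm, phi_x / norm        # unit normal field
    div_y, _ = np.gradient(ny)                 # d(ny)/dy
    _, div_x = np.gradient(nx)                 # d(nx)/dx
    return div_x + div_y

# Sanity check: for phi = sqrt(x^2 + y^2) - r, the level sets are circles,
# and a circle of radius R has curvature 1/R.
y, x = np.mgrid[-20:21, -20:21].astype(float)
phi = np.sqrt(x**2 + y**2) - 10.0
kappa = curvature(phi)
print(round(kappa[20, 30], 2))  # at radius 10, roughly 1/10
```

The circle check is a handy unit test for any curvature discretization before plugging it into the level-set evolution.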
Multiphase Chan-Vese active contours without edges [3]
How do we solve the problem above? Naturally, we think about using more contours. With N contours (phases), we should be able to distinguish up to 2^N colors. I will add more information on this topic later; if you are interested in this part, please see its reference paper. Here are two examples. The following image contains several complicated structures: a triple junction, a quadruple junction, different colors, and different shapes. Single-phase active contours cannot distinguish all objects in the image, while two-phase active contours can achieve this goal.
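The counting argument above (N phases give 2^N regions) comes from reading the signs of the N level-set functions as an N-bit code per pixel. A toy Python illustration with two made-up phase arrays (not my MATLAB code):

```python
import numpy as np

# Signs of N level-set functions give each pixel an N-bit label, so N phases
# can encode up to 2^N regions. Toy illustration with N = 2 (made-up arrays):
phi1 = np.array([[ 1.0,  1.0, -1.0],
                 [ 1.0, -1.0, -1.0]])
phi2 = np.array([[ 1.0, -1.0,  1.0],
                 [-1.0, -1.0, -1.0]])

label = 2 * (phi1 > 0).astype(int) + (phi2 > 0).astype(int)
print(label)                      # values in {0, 1, 2, 3}
print(len(np.unique(label)))      # 4 distinguishable regions from 2 phases
```

Each of the four sign combinations (+,+), (+,-), (-,+), (-,-) names its own region, which is exactly how two phases separate four colors.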
The result above shows that multiphase active contours can distinguish many more objects and automatically detect every edge in the input image. The method also handles complicated topological changes easily.
In this photo, the background is the sky and the foreground consists of two red and two yellow flowers. Using multiphase active contours, we get the following segmentation.
Really cool, isn't it?
5. Differences between methods

Although we have already said something about the differences between these methods, you might still wonder what exactly they are. Which one gives the best quality, and which one works fastest? This section tries to show their differences.

1. Gray image or color image?

Treating the image as a gray image or as a color image leads to two different approaches. If you treat the image as a gray one, you only need to care about the two image forces mentioned in the previous section. However, if you treat it as a color one, you have to apply the idea for vector-valued images, which computes all components of the forces. Generally speaking, treating the image as a gray one simplifies the segmentation problem and leads to simpler calculation, but loses some image information. Treating it as a color one helps denoise the image but requires more calculation. Therefore, if you have a clean (noise-free) image, you can probably treat it as a gray image; otherwise, I would treat it as a vector-valued image, trading calculation complexity for better denoising ability.

Recall the denoising example in section 4. Let's see what happens when we treat the same image as a gray one. It is not hard to convince yourself why we get the result above. However, when we treat it as a vector-valued image, the strong denoising ability shows: we get a very clean segmentation compared to the noisy input, and this segmentation almost perfectly matches the original clean image.

2. Single phase or multiphase?

Treating the problem as single-phase means you can only 'divide' the original image into two parts (however you interpret them: two colors, background and foreground, etc.). Treating it as multiphase means you can 'divide' the original image into more parts, more precisely 2^N parts, where N is the number of phases.
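Returning to point 1 above, the vector-valued fitting term sums a squared deviation from the region mean over every channel. A minimal Python sketch (the function name, toy image, and equal channel weights are my own illustrative choices, not taken from the paper or my MATLAB package):

```python
import numpy as np

def fitting_energy(img, mask):
    # Fitting term for a vector-valued image (H x W x channels): squared
    # deviation of every pixel from its region's per-channel mean, summed
    # over interior (mask True) and exterior, averaged over channels.
    inside, outside = img[mask], img[~mask]          # (pixels, channels)
    e_in = np.sum((inside - inside.mean(axis=0)) ** 2)
    e_out = np.sum((outside - outside.mean(axis=0)) ** 2)
    return (e_in + e_out) / img.shape[-1]

# Toy two-channel image: left half one "color", right half another.
img = np.zeros((4, 4, 2))
img[:, 2:] = [0.9, 0.1]

good_mask = np.zeros((4, 4), dtype=bool)
good_mask[:, 2:] = True          # matches the true partition
bad_mask = np.zeros((4, 4), dtype=bool)
bad_mask[:2, :] = True           # cuts across both colors

print(fitting_energy(img, good_mask) < fitting_energy(img, bad_mask))  # True
```

The correct partition drives the energy to zero, while a partition that mixes the two colors pays a variance penalty in every channel; this per-channel pooling is what gives the vector-valued model its denoising ability.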
So if the objects you are interested in are not that 'simple' (a single object, a homogeneous region, etc.), you should probably apply multiphase for better results. However, you have to trade off time for multiphase: if you do not have a fast machine, waiting for the result can be a really painful experience. But if you only want to detect a homogeneous region, as in text segmentation, single-phase is a good choice for this kind of task.

Here is the result when we apply a single-phase active contour to 'flowers.jpg'. As mentioned before, single-phase can only 'divide' the image into two parts. In the result above, the algorithm finally recognizes every 'bright' thing as the background and every 'dark' thing as the foreground; hence some 'highlights' on the stems and flowers have been assigned to the background. Still, we can recognize the 'flowers' in the result: at least the algorithm separated the sky and the flowers. More precisely, there are at least five colors in the original image: blue, white, red, yellow, and green. With only one phase, the algorithm has to group these five regions into two. How does it do this? Basically, it groups the regions according to their mean values: if two regions have mean values closer to each other than to the other colors, they are grouped into one. Clearly, the mean values of white and sky blue are much closer to each other than to the others, so these two color regions merge into one in the final segmentation.

With multiphase, we get more information: the image is divided not just into 'black' and 'white' regions but into more color regions. Although you may argue that the result is still not perfect, for me it is already good enough. With two phases we can divide the image into at most four regions (or you could say four colors). Now we have successfully distinguished the flower colors, which were grouped into one region in the single-phase case.
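The 'grouping by mean value' behavior described above can be mimicked by brute force: given the approximate R+G+B means measured from flowers.jpg (listed in the table below), search every split of the five colors into two groups for the one with the smallest within-group squared deviation. A Python sketch (illustrative only, not my MATLAB code):

```python
from itertools import combinations

# Approximate R+G+B mean value of each color region in flowers.jpg.
means = {'red': 218, 'green': 78, 'yellow': 370, 'white': 682, 'sky_blue': 585}

def within_ss(vals):
    """Sum of squared deviations from the group's own mean."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals)

# Brute force over every split into two non-empty groups.
names = list(means)
best = min(
    (frozenset(c) for r in range(1, len(names)) for c in combinations(names, r)),
    key=lambda g: within_ss([means[n] for n in g])
                + within_ss([means[n] for n in names if n not in g]),
)
group = sorted(best) if 'white' in best else sorted(set(names) - best)
print(group)  # -> ['sky_blue', 'white']
```

The search recovers exactly the single-phase outcome: white and sky blue in one group, red, green, and yellow in the other.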
However, we still failed to separate 'white' from 'sky blue'. Why? Because we only have two phases, and consequently we can distinguish at most four regions; there is no way to detect a five-color image with a four-color detection algorithm. You may also ask: can we distinguish 'white' and 'sky blue' but not 'yellow' and 'red', or some other color pair? That is, if we run the algorithm in a different way, is it possible to get a segmentation that distinguishes 'white' and 'sky blue' but leaves two other colors together? The answer is YES. The reason it is possible is that our algorithm is a variance-based algorithm.

Approximate mean value of each color on all three components:

              R     G     B    R+G+B
  Red:       212     3     3    (218)
  Green:      30    28    20     (78)
  Yellow:    197   172     1    (370)
  White:     214   227   241    (682)
  Sky Blue:  157   190   211    (585)

Clearly, both sky blue and white have all three component means above 120, while the other three colors share the feature of a very low 'B' value. More precisely, because we use equal weights for the RGB image, R+G+B can stand for the overall mean value. Without doubt, if we need to divide the five numbers above into the two groups that differ most in mean, we should put White and Sky Blue in one group and leave the other three colors in the other. This is why, in the single-phase case, the algorithm grouped Red, Green, and Yellow into one group and White and Sky Blue into the other. Furthermore, in the two-phase case, Red, Green, and Yellow have more salient differences in their mean values than White and Sky Blue do. Starting from Red, Green, and Yellow in one group and White and Sky Blue in the other, let's look at the cost of distinguishing each of them. Note that the closer the distance to the group average, the higher the cost.
Average(Red+Green+Yellow) = 222

            R+G+B   distance from the average
  Red:       218          4
  Green:      78        144
  Yellow:    370        148

Average(White+Sky Blue) = 633.5

            R+G+B   distance from the average
  White:     682         48.5
  Sky Blue:  585         48.5

As we can see, it costs less to distinguish Green or Yellow out of the (Red+Green+Yellow) group than to distinguish White or Sky Blue. Note: the actual situation might not be exactly the same, because the active contour may also reach a different segmentation depending on the initial conditions.

6. References
