Gaussian Mixture Model for Gridded Data
To fit Gaussian mixture models (GMMs) on the brain space, an extended formulation of the GMM is needed. The traditional GMM fits Gaussians on the joint (x, y, z, v) space and thus usually does not yield Gaussianity on the (x, y, z) brain space itself. One way around this problem is to resample the (x, y, z) space according to its corresponding value v(x, y, z), but such an approach is not computationally efficient. Therefore, I propose an alternative formulation of the GMM, capable of fitting a GMM on the (x, y, z) (or even higher-dimensional) space directly from the gridded data using the EM algorithm. [more]
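A minimal sketch of the idea (my own illustration, not necessarily the exact formulation in the article): treat each grid point as a data point whose weight is its value v(x, y, z), and fold those weights into the E- and M-steps so no resampling is needed.

```python
import numpy as np
from scipy.stats import multivariate_normal

def weighted_gmm_em(X, v, K, n_iter=200, seed=0):
    """Fit a K-component GMM to gridded data without resampling.

    X : (N, d) grid coordinates; v : (N,) nonnegative grid values,
    folded into EM as per-point weights.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    w = v / v.sum()                                    # normalized grid weights
    # Initialize means at grid points drawn proportionally to their value.
    mu = X[rng.choice(N, size=K, replace=False, p=w)].astype(float)
    cov = np.stack([np.cov(X.T).reshape(d, d) + 1e-6 * np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities of each component at every grid point.
        dens = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], cov[k])
                         for k in range(K)], axis=1)
        r = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        wr = r * w[:, None]                            # weight by grid value
        Nk = wr.sum(axis=0)
        # M-step: the grid weights replace the usual 1/N sample averages.
        pi = Nk
        for k in range(K):
            mu[k] = wr[:, k] @ X / Nk[k]
            diff = X - mu[k]
            cov[k] = (wr[:, k] * diff.T) @ diff / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, cov
```

With a single component, the estimate reduces to the v-weighted mean and covariance of the grid, which is exactly the behavior that resampling would approximate.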
Logistic Regression with L2-Norm Regularization for Feature Selection
In this article we show how to retrieve a set of good features via logistic regression with L2-norm regularization (as opposed to LASSO or L0-norm regularization). Logistic regression is a linear classifier whose parameters are the weights, usually collected in a weight vector w, and lambda, the regularization parameter. After training the logistic regression, w is estimated, and we show that the magnitude of each weight represents how important the corresponding feature is to classification of the training set. Here we compare the weights with the mutual information (MI) at each feature. [more]
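A minimal sketch of the comparison, assuming scikit-learn and a synthetic data set (the data and settings in the article may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic set: the first 3 of 10 features are informative
# (shuffle=False keeps them in the first columns).
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
X = StandardScaler().fit_transform(X)   # equal scales so |w| is comparable

# L2-regularized logistic regression; C is the inverse of lambda.
clf = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
importance = np.abs(clf.coef_[0])       # weight magnitude per feature

# Mutual information between each feature and the class label.
mi = mutual_info_classif(X, y, random_state=0)

ranking_w = np.argsort(importance)[::-1]
ranking_mi = np.argsort(mi)[::-1]
```

Standardizing the features first matters: otherwise the weight magnitudes reflect feature scale as much as feature relevance, and the comparison with MI is not meaningful.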

Static Hand Posture Recognition
My colleague and I apply a minimum-divergence-based classifier to hand posture recognition. Each hand posture image is modeled using a Gaussian mixture model (GMM) before being input to the classifier. From a simple observation, we found that the Cauchy-Schwarz divergence admits a closed-form expression for GMMs. Therefore, the classification can be done quickly, efficiently, and accurately. [more]

The Closed-Form Expression for the Divergence of Gaussian Mixture Models
The Gaussian mixture model (GMM) is a very popular and powerful model but, unfortunately, it is well known that no closed-form expression exists for the famous Kullback-Leibler divergence between two such distributions. On the other hand, we show that a closed-form expression is possible using the Cauchy-Schwarz divergence, and we derive it. [pdf]
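As an illustration of why the closed form exists (a sketch based on the standard Gaussian overlap identity, integral of N(x; m1, S1) N(x; m2, S2) dx = N(m1; m2, S1 + S2); see the paper for the full derivation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_cross_term(w1, mu1, cov1, w2, mu2, cov2):
    # <p, q> = integral of p(x) q(x) dx for two GMMs: a double sum of
    # Gaussian overlap integrals, each available in closed form.
    total = 0.0
    for a, m1, S1 in zip(w1, mu1, cov1):
        for b, m2, S2 in zip(w2, mu2, cov2):
            total += a * b * multivariate_normal.pdf(m1, mean=m2, cov=S1 + S2)
    return total

def cs_divergence(w1, mu1, cov1, w2, mu2, cov2):
    # Cauchy-Schwarz divergence: -log( <p,q> / sqrt(<p,p><q,q>) ),
    # zero iff p = q, by the Cauchy-Schwarz inequality.
    pq = gmm_cross_term(w1, mu1, cov1, w2, mu2, cov2)
    pp = gmm_cross_term(w1, mu1, cov1, w1, mu1, cov1)
    qq = gmm_cross_term(w2, mu2, cov2, w2, mu2, cov2)
    return -np.log(pq / np.sqrt(pp * qq))
```

Each of the three terms is a finite double sum over components, so the whole divergence is exact and cheap, with no Monte Carlo sampling.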

An Information-Theoretic Criterion for Evaluating Unsupervised Image Segmentation
We propose an alternative criterion to evaluate unsupervised image segmentation by generalizing the traditional precision-recall (PR) curve. The proposed methodology incorporates nonparametric density estimation so that it is more robust to "legitimate" mismatches between the ground-truth contours and the detected contours. [pdf]
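A toy illustration of the idea (my own sketch, using a Gaussian kernel on contour distance transforms rather than the exact estimator in the paper): a detected pixel counts partially, according to how far it is from the nearest ground-truth pixel, instead of the hard hit/miss of the classical PR curve.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def soft_precision_recall(detected, truth, sigma=2.0):
    """Soft PR between two boolean contour maps of the same shape."""
    # Distance from every pixel to the nearest contour pixel of each map.
    d_truth = distance_transform_edt(~truth)
    d_det = distance_transform_edt(~detected)
    # Gaussian kernel turns "exactly on a truth pixel" into "near one",
    # so slightly displaced contours are not fully penalized.
    precision = np.exp(-d_truth[detected] ** 2 / (2 * sigma ** 2)).mean()
    recall = np.exp(-d_det[truth] ** 2 / (2 * sigma ** 2)).mean()
    return precision, recall
```

A contour shifted by one pixel then scores close to, but below, 1.0, rather than the 0.0 that exact pixel matching would report.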

Irregular Tree-Structured Bayesian Networks (ITSBN)
This is an unsupervised image segmentation algorithm using a Bayesian network whose structure is learned specifically from the context of the input image. The learned tree structure adds a dependency regularization to the framework, resulting in better homogeneity in the segmentation [pdf]. MATLAB code is available here [more].

Image Segmentation using Gaussian Mixture Models (GMMs) and the Bayesian Information Criterion (BIC)
A while ago, I was amazed by image segmentation results using Gaussian mixture models (GMMs), because GMMs give pretty good results on normal/natural images. However, the number of components has to be provided a priori. One way to avoid this problem is to apply some regularization or a penalty for model complexity. There are many criteria off the shelf, and here I would like to try BIC. [more]
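A minimal sketch of BIC-based model selection with scikit-learn, on synthetic data standing in for the per-pixel feature vectors (the actual features and search range in the article may differ):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for pixel features: three well-separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.4, size=(200, 2))
               for c in ([0.0, 0.0], [4.0, 0.0], [2.0, 4.0])])

# Fit GMMs with increasing K and keep the one with the lowest BIC;
# BIC = -2 log-likelihood + (number of parameters) * log N, so it
# penalizes extra components automatically.
bic = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(X)
    bic.append(gmm.bic(X))
best_k = int(np.argmin(bic)) + 1
labels = GaussianMixture(n_components=best_k, n_init=3,
                         random_state=0).fit_predict(X)
```

For image segmentation, `X` would be the feature vector of every pixel and `labels` reshaped back to the image grid gives the segmentation.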
Image segmentation using a GMM supporting multiple features extracted from the input image. The features included in the function are, for instance, generalized RGB (gRGB), standardized CIELuv (sLuv), generalized-standardized CIELab (gsLab), standardized grayscale, and the standardized (x, y) location of each pixel. The user can add any feature to the function directly. For more information [link]; the code is available here.
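A sketch of how such a per-pixel feature matrix might be assembled. Only a chromaticity-style "generalized RGB" and the standardized (x, y) position are shown; the feature names come from the text above, but the exact definitions below are my assumptions:

```python
import numpy as np

def pixel_features(img):
    """Build an (H*W, 5) feature matrix from an H x W x 3 RGB image:
    generalized RGB (chromaticity, assumed definition) + standardized (x, y).
    """
    H, W, _ = img.shape
    rgb = img.astype(float)
    # "Generalized" RGB: each channel divided by the per-pixel sum,
    # which removes overall brightness (an assumption about gRGB).
    grgb = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-9)
    # Pixel coordinates as spatial features, so nearby pixels tend to
    # fall into the same mixture component.
    yy, xx = np.mgrid[0:H, 0:W]
    xy = np.stack([xx, yy], axis=2).astype(float)
    feats = np.concatenate([grgb, xy], axis=2).reshape(-1, 5)
    # Standardize every column to zero mean and unit variance.
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    return feats
```

Additional features (CIELuv, CIELab, grayscale) would be concatenated as extra columns in the same way before standardization.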

 kNN Random Walk for initializing the conditional probability table of a Bayesian network [pdf] 
 Deformable Bayesian Network with Gaussian data patch [pdf]

 Underwater data clustering and fusion using deformable Bayesian networks (DFBN) 

Vegetation Filtering on 3D LiDAR point cloud dataIn this project, I developed an informationtheoreticbased algorithm to remove vegetation and ground artifacts from 3D LiDAR point cloud data. The challenges are that the resulting surface should look clean (less noisy) whereas the ground details are significantly preserved. This can be done by learning the probabilistic models of ground and nonground objects on the site. [more]
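As a much-simplified toy sketch of the "learn probabilistic models of ground and non-ground" idea (my own illustration, not the information-theoretic algorithm in the article): fit a two-component mixture to point heights and keep the points assigned to the lower component.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def filter_vegetation(points):
    """points: (N, 3) array of (x, y, z). Fit a 2-component GMM on z
    and keep points assigned to the lower-mean (ground) component."""
    z = points[:, 2:3]
    gm = GaussianMixture(n_components=2, random_state=0).fit(z)
    labels = gm.predict(z)
    ground = int(np.argmin(gm.means_.ravel()))   # component with lower mean z
    return points[labels == ground]
```

A real pipeline would work on local height residuals rather than raw z (so sloped terrain is handled) and use richer per-class models, but the likelihood-based classification step is the same in spirit.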

 