Relevant projects showroom

This is a "research showroom" for some of my past projects, where you can find links to my papers and code available for non-commercial purpose. Your comments and suggestions are more than welcome here!

Gaussian mixture model for gridded data

In order to fit Gaussian mixture models (GMMs) in the brain space, a modified formulation of the GMM is needed. The traditional GMM fits the Gaussians in the joint (x, y, z, v) space and therefore does not usually yield Gaussianity in the (x, y, z) brain space. One way around this problem is to resample the (x, y, z) space according to the corresponding value v(x, y, z), but such an approach is not computationally efficient. I therefore propose an alternative formulation of the GMM, capable of fitting the mixture in the (x, y, z) (or even higher-dimensional) space directly from the gridded data using the EM algorithm. [more]
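
For intuition, here is a minimal sketch of one way to realize this idea, treating each voxel value v(x, y, z) as a soft count for its grid coordinate in a weighted EM update. This is a simplified illustration, not the code released with the paper:

```python
import numpy as np

def weighted_gmm_em(X, w, K, n_iter=100, reg=1e-6, seed=0):
    """X: (N, D) grid coordinates; w: (N,) nonnegative voxel values v(x, y, z)."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    w = w / w.sum()
    # Initialize means at grid points drawn proportionally to the voxel values.
    mu = X[rng.choice(N, size=K, p=w, replace=False)]
    cov = np.stack([np.cov(X.T) + reg * np.eye(D)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each grid point.
        log_r = np.empty((N, K))
        for k in range(K):
            L = np.linalg.cholesky(cov[k])
            z = np.linalg.solve(L, (X - mu[k]).T)
            log_r[:, k] = (np.log(pi[k]) - 0.5 * (z ** 2).sum(axis=0)
                           - np.log(np.diag(L)).sum() - 0.5 * D * np.log(2 * np.pi))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: every sufficient statistic is weighted by the voxel value,
        # which is what makes the fit live in (x, y, z) rather than (x, y, z, v).
        wr = w[:, None] * r
        Nk = wr.sum(axis=0)
        pi = Nk / Nk.sum()
        mu = (wr.T @ X) / Nk[:, None]
        for k in range(K):
            d = X - mu[k]
            cov[k] = (wr[:, k][:, None] * d).T @ d / Nk[k] + reg * np.eye(D)
    return pi, mu, cov
```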

Logistic regression with L2-norm regularization for feature selection

In this article we show how to retrieve a set of good features via logistic regression with L2-norm regularization (as opposed to LASSO or L0-norm regularization). Logistic regression is a linear classifier parameterized by a weight vector w and a regularization parameter lambda. After training, we show that the magnitude of each estimated weight reflects how important the corresponding feature is for classifying the training set. Here we compare the weights with the mutual information (MI) at each feature. [more]
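
A minimal sketch of the comparison on synthetic data, using scikit-learn (my choice of library here; the article's own experiments may use different code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X = StandardScaler().fit_transform(X)  # weights are comparable only on a common scale

clf = LogisticRegression(penalty="l2", C=1.0).fit(X, y)  # C = 1 / lambda
w_rank = np.argsort(-np.abs(clf.coef_[0]))               # features ranked by |w|
mi_rank = np.argsort(-mutual_info_classif(X, y, random_state=0))

print("top features by |w|:", w_rank[:5])
print("top features by MI :", mi_rank[:5])
```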

Static Hand Posture Recognition

My colleague and I apply a minimum-divergence classifier to hand posture recognition. Each hand posture image is modeled as a Gaussian mixture model (GMM) before being passed to the classifier. From a simple observation, we found that the Cauchy-Schwarz divergence admits a closed-form expression for GMMs, so the classification can be done quickly, efficiently, and accurately. [more]
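
The core computation can be sketched as follows (a simplified illustration with my own notation; the published system works on image features rather than toy mixtures like these). It relies on the Gaussian overlap identity int N(x|m1,S1) N(x|m2,S2) dx = N(m1|m2, S1+S2):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_overlap(p, q):
    """Closed-form int p(x) q(x) dx for GMMs given as (weights, means, covs)."""
    return sum(wa * wb * multivariate_normal.pdf(ma, mean=mb, cov=Sa + Sb)
               for wa, ma, Sa in zip(*p) for wb, mb, Sb in zip(*q))

def cs_divergence(p, q):
    """Cauchy-Schwarz divergence: zero iff the two mixtures are identical."""
    return -np.log(gmm_overlap(p, q)
                   / np.sqrt(gmm_overlap(p, p) * gmm_overlap(q, q)))

def classify(test_gmm, class_gmms):
    """Pick the posture class whose GMM is nearest in CS divergence."""
    return min(class_gmms, key=lambda c: cs_divergence(test_gmm, class_gmms[c]))

# Sanity check: a mixture has zero divergence from itself.
p = ([0.5, 0.5], [np.zeros(2), np.ones(2)], [np.eye(2), np.eye(2)])
print(cs_divergence(p, p))  # ~0.0
```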

The closed-form expression for the divergence of Gaussian mixture models

The Gaussian mixture model (GMM) is a very popular and powerful model, but it is well known that no closed-form expression exists for the Kullback-Leibler divergence between such distributions. We show that a closed-form expression is possible with the Cauchy-Schwarz divergence, and we derive it. [pdf]
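
The expression has the following shape (my transcription in standard notation; see the paper for the full derivation). For mixtures p(x) = sum_i pi_i N(x; mu_i, Sigma_i) and q(x) = sum_j tau_j N(x; nu_j, Omega_j):

```latex
D_{CS}(p, q)
  = -\log \frac{\int p(x)\, q(x)\, dx}
               {\sqrt{\int p(x)^2\, dx \,\int q(x)^2\, dx}},
\qquad
\int p(x)\, q(x)\, dx
  = \sum_{i,j} \pi_i \tau_j\,
    \mathcal{N}\!\left(\mu_i \mid \nu_j,\; \Sigma_i + \Omega_j\right),
```

with the self-terms int p^2 and int q^2 following from the same overlap identity applied to each mixture against itself.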

An information-theoretic criterion for evaluating unsupervised image segmentation

We propose an alternative criterion for evaluating unsupervised image segmentation by generalizing the traditional precision-recall (PR) curve. The proposed methodology incorporates nonparametric density estimation so that it is more robust to "legitimate" mismatches between the ground-truth contours and the detected contours. [pdf]
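
To give a flavor of the idea (an illustrative sketch in my own simplified form, not the paper's exact criterion): score each contour point with a Gaussian kernel on its distance to the other contour, so small localization offsets are discounted smoothly instead of being counted as hard misses.

```python
import numpy as np

def soft_match(a, b, h=2.0):
    """Mean Gaussian-kernel match of each point in a to its nearest point in b.
    a, b: (n, 2) arrays of contour pixel coordinates; h is a bandwidth in pixels."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return float(np.exp(-d2.min(axis=1) / (2.0 * h ** 2)).mean())

det = np.array([[10.0, 10.0], [20.0, 21.0]])  # detected contour points
gt = np.array([[10.0, 11.0], [20.0, 20.0]])   # ground-truth contour points
print("soft precision:", soft_match(det, gt))
print("soft recall   :", soft_match(gt, det))
```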

Irregular Tree-Structured Bayesian Networks (ITSBN)

This is an unsupervised image segmentation algorithm using a Bayesian network whose structure is learned from the context of the input image. The learned tree structure adds a dependency regularization to the framework, resulting in better homogeneity in the segmentation [pdf]. MATLAB code is available here [more].

Image Segmentation using Gaussian Mixture Model (GMM) and Bayesian Information Criterion (BIC)

A while ago, I was amazed by image segmentation results obtained with Gaussian mixture models (GMMs), because a GMM gives pretty good results on ordinary natural images. However, we have to provide the number of components a priori. One way to avoid this problem is to apply some regularization or a penalty on model complexity. There are many criteria off the shelf, and here I would like to try BIC. [more]
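
A minimal sketch of the selection step with scikit-learn (my choice of tooling here; the experiments described in the post were not necessarily run this way): fit GMMs over a range of component counts on the per-pixel feature vectors and keep the one with the lowest BIC.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_gmm_by_bic(features, k_range=range(2, 11)):
    """features: (n_pixels, n_dims) array, e.g. color plus (x, y) per pixel."""
    models = [GaussianMixture(n_components=k, random_state=0).fit(features)
              for k in k_range]
    bics = [m.bic(features) for m in models]  # lower BIC = better fit/complexity trade-off
    return models[int(np.argmin(bics))]

# labels = select_gmm_by_bic(features).predict(features)  # per-pixel segment labels
```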

Simple Image Segmentation using GMM

Image segmentation using a GMM supporting multiple features extracted from the input image. The features included in the function are, for instance, generalized RGB (gRGB), standardized CIELuv (sLuv), generalized-standardized CIELab (gsLab), standardized gray scale, and the standardized x-y location of each pixel. The user can add any feature to the function directly. For more information see [link]; the code is available here.
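
As an example of how such a feature matrix can be assembled (the exact feature definitions below are my assumptions, e.g. gRGB as brightness-normalized chromaticity and z-scored pixel coordinates):

```python
import numpy as np

def pixel_features(img):
    """img: (H, W, 3) float RGB in [0, 1] -> (H*W, 4) per-pixel feature matrix."""
    H, W, _ = img.shape
    rgb = img.reshape(-1, 3).astype(float)
    grgb = rgb / (rgb.sum(axis=1, keepdims=True) + 1e-8)    # generalized RGB
    yy, xx = np.mgrid[0:H, 0:W]
    xy = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    xy = (xy - xy.mean(axis=0)) / (xy.std(axis=0) + 1e-8)   # standardized x-y
    # Drop one gRGB channel: the three sum to 1, which would make the
    # mixture covariances singular.
    return np.hstack([grgb[:, :2], xy])
```

Feeding this matrix to a GMM (for example, the BIC-selected one from the previous project) and reshaping the predicted labels back to (H, W) gives the segmentation map.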

k-NN Random Walk for initializing the conditional probability table of a Bayesian network [pdf]

Deformable Bayesian Network with Gaussian data patch [pdf]

Underwater data clustering and fusion using deformable Bayesian networks (DFBN)

Vegetation Filtering on 3D LiDAR point cloud data

In this project, I developed an information-theoretic algorithm to remove vegetation and ground artifacts from 3D LiDAR point cloud data. The challenge is that the resulting surface should look clean (less noisy) while the ground details are significantly preserved. This can be done by learning probabilistic models of the ground and non-ground objects at the site. [more]
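
One simple way to cast the last step (an illustrative sketch under my own assumptions, not the project's actual pipeline): fit one GMM to features of hand-labeled ground points and one to vegetation points, then keep each point in the cloud according to a likelihood-ratio test.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ground_mask(feats_ground, feats_veg, feats_all, n_components=3):
    """Each argument: (n, d) per-point features, e.g. height above a local
    ground estimate and local surface roughness; labeled sets come from
    small hand-picked patches on the site."""
    g = GaussianMixture(n_components=n_components, random_state=0).fit(feats_ground)
    v = GaussianMixture(n_components=n_components, random_state=0).fit(feats_veg)
    # Keep a point if the ground model explains it better than the vegetation model.
    return g.score_samples(feats_all) > v.score_samples(feats_all)
```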