Academic Qualification

  • Completed Ph.D. (Computer Science and Engineering) from SGGS Institute of Engineering and Technology, S.R.T. Marathwada University, Nanded in April 2015.

  • Completed ME (Computer Engineering) from Bharati Vidyapeeth College of Engineering, Pune in 2006.

  • Completed BE (CSE) from MGM College of Engineering, Nanded under S.R.T. Marathwada University, Nanded in 2001.

Ph.D. work summary:

The objective of this research work is to study some of the existing neuro-fuzzy systems, to propose new neuro-fuzzy models that can be used in pattern classification problems, and to extract classification rules from these models. These rules serve as the justification for the classification decisions given by the proposed neuro-fuzzy models. In this thesis, seven new algorithms are proposed that provide interesting extensions to some of the existing algorithms. These methods can be used to solve pattern classification and recognition problems. For the experimentation with the proposed classification models, standard benchmark datasets from the University of California, Irvine (UCI) machine learning repository are used. All input data is normalized by mapping it into the n-dimensional unit cube, i.e., I^n. The performance of each model is evaluated using different performance measures, and the results are compared with existing techniques.
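For concreteness, a minimal sketch of this normalization step is given below, assuming plain per-feature min-max scaling (a common way of mapping a dataset into I^n; the thesis may use a different scheme):

    import numpy as np

    def normalize_to_unit_cube(X):
        """Scale each feature of X linearly into [0, 1], i.e. map the
        dataset into the n-dimensional unit cube I^n (min-max scaling)."""
        X = np.asarray(X, dtype=float)
        mins = X.min(axis=0)
        ranges = X.max(axis=0) - mins
        ranges[ranges == 0] = 1.0          # guard against constant features
        return (X - mins) / ranges

    # Example: three 2-D patterns mapped into I^2
    X = np.array([[2.0, 10.0], [4.0, 30.0], [6.0, 20.0]])
    print(normalize_to_unit_cube(X))       # all values now lie in [0, 1]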

The first is the ProSum algorithm, an extension of the algorithm proposed by Ghosh, which uses the product aggregation reasoning rule. Analysis of the results given by the product aggregation reasoning rule shows that it misclassifies (i) when the patterns are completely overlapping, or (ii) when the pattern belongingness to a class becomes zero because any single feature belongingness value is zero, since the product of all feature belongingness values is taken to predict the class. To correct these two problems, the ProSum algorithm is proposed. It applies the product aggregation reasoning rule to all training samples, followed by the proposed sum aggregation reasoning rule for only those samples for which either (i) or (ii) holds. This combination of the product aggregation reasoning rule followed by the sum aggregation reasoning rule works well and gives greater accuracy than either rule alone. The second algorithm uses the Gaussian membership function to obtain the belongingness value of each feature to each of the given fuzzy classes; the Gaussian membership function captures the variation/dispersion in the data. These fuzzified feature values, along with the associated class labels, are then used to train an artificial neural network, which outputs the pattern belongingness to each class. This soft decision is converted into a hard decision using the MAX defuzzification operation.
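A rough sketch of the product/sum fallback idea behind ProSum is given below; the exact fallback conditions, the form of the sum rule, and the function names are illustrative assumptions rather than the thesis implementation:

    import numpy as np

    def product_aggregation(memberships):
        """Product reasoning rule: class score = product of per-feature
        memberships.  memberships has shape (n_classes, n_features)."""
        return memberships.prod(axis=1)

    def sum_aggregation(memberships):
        """Sum reasoning rule: class score = sum of per-feature memberships."""
        return memberships.sum(axis=1)

    def prosum_classify(memberships):
        """Hypothetical ProSum-style decision for one pattern: apply the
        product rule first, and fall back to the sum rule when (i) the top
        product scores tie (completely overlapping patterns) or (ii) some
        class score collapses to zero because one feature membership is zero."""
        scores = product_aggregation(memberships)
        top = np.sort(scores)[::-1]
        tie = len(top) > 1 and np.isclose(top[0], top[1])
        if tie or np.any(scores == 0.0):
            scores = sum_aggregation(memberships)
        return int(np.argmax(scores))      # hard decision via MAX defuzzification

    # Two classes, three features: class 0 loses under the product rule only
    # because one of its feature memberships is zero; the sum rule recovers it.
    m = np.array([[0.9, 0.0, 0.8],
                  [0.4, 0.5, 0.3]])
    print(prosum_classify(m))              # prints 0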

The next algorithm uses an artificial neural network to learn the membership functions of the fuzzy classes from the data, and the individual feature membership values are obtained from it. Feature selection based on information gain is then performed to select the subset of features that gives the highest classification accuracy. Finally, each pattern is assigned to the predicted class using a defuzzification operation. The proposed classification model exploits the learning capability of the artificial neural network and the robustness of the fuzzy system, which incorporates imprecision into the system model. Another method is the extraction of classification rules from a modified fuzzy hyperline segment neural network. The basic fuzzy hyperline segment neural network is extended with a number of enhancements: handling of discrete attribute values, an improved membership function, and extraction of classification rules from the trained network. In addition, one more variant of the membership function of the fuzzy hyperline segment neural network is proposed, which gives improved classification accuracy compared to its basic counterpart. A further method, mining classification rules from the basic fuzzy min-max neural network, is also proposed; it extracts the classification rules from Simpson's fuzzy min-max neural network.
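The information-gain ranking step could look roughly like the following sketch; the equal-width discretisation, the bin count, and the helper names are assumptions made for illustration:

    import numpy as np

    def entropy(labels):
        """Shannon entropy of a discrete label vector."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(feature, labels, bins=10):
        """Information gain of one (continuous) feature with respect to the
        class labels, after discretising the feature into equal-width bins."""
        binned = np.digitize(feature, np.histogram_bin_edges(feature, bins=bins))
        h_before = entropy(labels)
        h_after = 0.0
        for b in np.unique(binned):
            mask = binned == b
            h_after += mask.mean() * entropy(labels[mask])
        return h_before - h_after

    def select_top_features(X, y, k):
        """Rank features by information gain and keep the k best."""
        gains = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
        return np.argsort(gains)[::-1][:k]

    # Example: keep the 2 most informative of 4 random features; the labels
    # depend only on feature 2, so it should rank first.
    rng = np.random.default_rng(0)
    X = rng.random((100, 4)); y = (X[:, 2] > 0.5).astype(int)
    print(select_top_features(X, y, k=2))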

Finally, the extraction of classification rules from a modified fuzzy min-max neural network is proposed. Here, the membership function of the basic fuzzy min-max neural network is modified in order to process discrete attributes. The overlap test and contraction cases are modified to reduce the misclassification caused by the overlap or contraction step. Pruning is also applied to reduce the number of hyperboxes created after training. The classification rules are then extracted from these hyperboxes and analyzed in terms of parameters such as support and error.
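A simplified sketch of how rules might be read off trained hyperboxes and scored by support and error is shown below; the Hyperbox structure and the exact definitions of support and error are assumptions for illustration and may differ from those used in the thesis:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Hyperbox:
        v: np.ndarray        # min point of the hyperbox
        w: np.ndarray        # max point of the hyperbox
        label: int           # class the hyperbox represents

    def box_to_rule(box, feature_names):
        """Turn a trained hyperbox into a readable IF-THEN classification rule."""
        conds = [f"{lo:.2f} <= {name} <= {hi:.2f}"
                 for name, lo, hi in zip(feature_names, box.v, box.w)]
        return "IF " + " AND ".join(conds) + f" THEN class {box.label}"

    def support_and_error(box, X, y):
        """Support: fraction of training patterns falling inside the box.
        Error: fraction of those covered patterns whose class differs from
        the box label."""
        inside = np.all((X >= box.v) & (X <= box.w), axis=1)
        support = inside.mean()
        covered = y[inside]
        error = 0.0 if covered.size == 0 else np.mean(covered != box.label)
        return support, error

    # Tiny illustration in I^2
    box = Hyperbox(v=np.array([0.1, 0.2]), w=np.array([0.4, 0.6]), label=1)
    X = np.array([[0.2, 0.3], [0.3, 0.5], [0.8, 0.9]])
    y = np.array([1, 0, 1])
    print(box_to_rule(box, ["f1", "f2"]))
    print(support_and_error(box, X, y))   # (0.666..., 0.5)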