Basically, classification is a two-step process. The first step is supervised learning: a model is built from a training data set whose tuples carry predefined class labels. The second step is evaluating the classification accuracy of that model. Likewise, data prediction is also a two-step process.
Before the classification and prediction, some preprocessing should be done, such as data cleaning, relevance analysis, and data transformation and reduction. The most interesting part, to me, is relevance analysis. Many of the attributes in the data may be redundant. Correlation analysis can be used to identify whether any two given attributes are statistically related. If there is a strong correlation between attributes A1 and A2, one of them can be removed from further analysis, because it contributes little extra information to the classification or prediction task.
The concept of the decision tree is pretty simple. Its pros are:
There are three popular attribute selection measures: information gain, gain ratio, and the Gini index.
Details of these methods can be found in the textbook, starting from page 306. The source code is available here.
When a decision tree is built, many of the branches will reflect anomalies in the training data due to noise or outliers. Tree pruning methods address this problem of overfitting the data. The more common approach is postpruning; methods include the cost-complexity pruning algorithm and pessimistic pruning. Although pruned trees tend to be more compact than their unpruned counterparts, they may still be rather large and complex. Decision trees can suffer from repetition and replication, making them overwhelming to interpret. One possible solution is to use a different form of knowledge representation, such as rules, instead of decision trees. This is a rule-based classifier.
BOAT (Bootstrapped Optimistic Algorithm for Tree Construction) is a decision tree algorithm that takes a completely different approach to scalability—it is not based on the use of any special data structures. Instead, it uses a statistical technique known as "bootstrapping" (Section 6.13.3 of the textbook) to create several smaller samples (or subsets) of the given training data, each of which fits in memory. BOAT usually requires only two scans of D. This is quite an improvement, even in comparison to traditional decision tree algorithms, which require one scan per level of the tree! BOAT was found to be two to three times faster than RainForest, while constructing exactly the same tree. An additional advantage of BOAT is that it can be used for incremental updates. That is, BOAT can take new insertions and deletions for the training data and update the decision tree accordingly.
Bayesian classiﬁers are statistical classiﬁers. They can predict class membership probabilities, such as the probability that a given tuple belongs to a particular class.
Bayesian classification is based on Bayes' theorem. Studies comparing classification algorithms have found a simple Bayesian classifier known as the naive Bayesian classifier to be comparable in performance with decision tree and selected neural network classifiers. Bayesian classifiers have also exhibited high accuracy and speed when applied to large databases.
Naïve Bayesian classiﬁers assume that the effect of an attribute value on a given class is independent of the values of the other attributes. This assumption is called class conditional independence. It is made to simplify the computations involved and, in this sense, is considered “naïve.” Bayesian belief networks are graphical models, which unlike naïve Bayesian classiﬁers, allow the representation of dependencies among subsets of attributes. Bayesian belief networks can also be used for classiﬁcation.
So here the core formulation is Bayes' theorem: P(Ci | X) = P(X | Ci) · P(Ci) / P(X). Under the class conditional independence assumption, P(X | Ci) = P(x1 | Ci) × P(x2 | Ci) × ... × P(xn | Ci).
If Ak is categorical, then P(Xk | Ci ) is the number of tuples of class Ci in D having the value Xk for Ak , divided by the number of tuples of class Ci in D.
If Ak is continuous-valued, then we need to do a bit more work, but the calculation is pretty straightforward. A continuous-valued attribute is typically assumed to have a Gaussian distribution with a mean µ and standard deviation σ, defined by g(x, µ, σ) = (1 / (√(2π)·σ)) · exp(−(x − µ)² / (2σ²)), so that P(Xk | Ci) = g(xk, µCi, σCi), where µCi and σCi are the mean and standard deviation of Ak for the tuples of class Ci.
If a probability value is zero, it nullifies the whole product. We can assume that our training database, D, is so large that adding one to each count we need would make only a negligible difference in the estimated probability value, yet would conveniently avoid probability values of zero. This technique for probability estimation is known as the Laplacian correction or Laplace estimator.
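A minimal sketch of a naive Bayesian classifier for categorical attributes with the Laplacian correction applied to every count (the toy tuples and attribute values below are made up for illustration):

```python
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels, laplace=1):
    """Estimate P(Ci) and P(xk | Ci) from categorical training data.

    Adds `laplace` to every count (Laplacian correction) so that an
    attribute value unseen for a class never yields zero probability.
    """
    n_attrs = len(rows[0])
    class_counts = Counter(labels)
    # value_counts[(attr_index, class)][value] -> count
    value_counts = defaultdict(Counter)
    values_per_attr = [set() for _ in range(n_attrs)]
    for row, c in zip(rows, labels):
        for k, v in enumerate(row):
            value_counts[(k, c)][v] += 1
            values_per_attr[k].add(v)

    def classify(x):
        best_class, best_score = None, -1.0
        for c, nc in class_counts.items():
            score = nc / len(labels)          # prior P(Ci)
            for k, v in enumerate(x):
                num = value_counts[(k, c)][v] + laplace
                den = nc + laplace * len(values_per_attr[k])
                score *= num / den            # corrected P(xk | Ci)
            if score > best_score:
                best_class, best_score = c, score
        return best_class

    return classify

# Hypothetical toy data: attributes (age, income), class "buys computer"
rows = [("youth", "high"), ("youth", "high"),
        ("senior", "low"), ("senior", "low")]
labels = ["no", "no", "yes", "yes"]
classify = train_naive_bayes(rows, labels)
print(classify(("senior", "low")))   # -> "yes"
```

Thanks to the correction, a query with an attribute value never seen for a class still gets a small nonzero probability instead of zeroing out the whole product.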
A rule-based classifier's learned model is represented as a set of IF-THEN rules. An IF-THEN rule is an expression of the form:
IF condition THEN conclusion.
A rule R can be assessed by its coverage and accuracy. Given a class-labeled data set, D, let ncovers be the number of tuples covered by R; ncorrect be the number of tuples correctly classified by R; and |D| be the number of tuples in D. We can define the coverage and accuracy of R as coverage(R) = ncovers / |D| and accuracy(R) = ncorrect / ncovers.
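These two measures are easy to compute directly; here is a small sketch (the rule and the toy data set are hypothetical):

```python
def rule_quality(rule, data):
    """Return (coverage, accuracy) of `rule` over class-labeled `data`.

    `rule` is a (condition, conclusion) pair: condition(x) -> bool,
    conclusion is the class label the rule predicts.
    """
    condition, conclusion = rule
    covered = [(x, y) for x, y in data if condition(x)]
    n_covers = len(covered)
    n_correct = sum(1 for _, y in covered if y == conclusion)
    coverage = n_covers / len(data)
    accuracy = n_correct / n_covers if n_covers else 0.0
    return coverage, accuracy

# Hypothetical rule: IF age = youth AND student = yes THEN buys_computer = yes
data = [({"age": "youth", "student": "yes"}, "yes"),
        ({"age": "youth", "student": "yes"}, "no"),
        ({"age": "youth", "student": "no"}, "no"),
        ({"age": "senior", "student": "no"}, "yes")]
rule = (lambda x: x["age"] == "youth" and x["student"] == "yes", "yes")
print(rule_quality(rule, data))  # coverage 2/4 = 0.5, accuracy 1/2 = 0.5
```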
Backpropagation is a neural network learning algorithm. Roughly speaking, a neural network is a set of connected input/output units in which each connection has a weight associated with it. During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples. Neural network learning is also referred to as connectionist learning due to the connections between units.
Neural networks involve long training times and are therefore more suitable for applications where this is feasible. They require a number of parameters that are typically best determined empirically, such as the network topology or “structure.” Neural networks have been criticized for their poor interpretability. For example, it is difﬁcult for humans to interpret the symbolic meaning behind the learned weights and of “hidden units” in the network. These features initially made neural networks less desirable for data mining.
Advantages of neural networks, however, include their high tolerance of noisy data as well as their ability to classify patterns on which they have not been trained. They can be used when you may have little knowledge of the relationships between attributes and classes. They are well-suited for continuous-valued inputs and outputs, unlike most decision tree algorithms. They have been successful on a wide array of real-world data, including handwritten character recognition, pathology and laboratory medicine, and training a computer to pronounce English text. Neural network algorithms are inherently parallel; parallelization techniques can be used to speed up the computation process.
In addition, several techniques have recently been developed for the extraction of rules from trained neural networks. These factors contribute toward the usefulness of neural networks for classiﬁcation and prediction in data mining.
The backpropagation algorithm performs learning on a multilayer feed-forward neural network. It iteratively learns a set of weights for prediction of the class label of tuples. A multilayer feed-forward neural network consists of an input layer, one or more hidden layers, and an output layer.
We say that it is a two-layer neural network. (The input layer is not counted because it serves only to pass the input values to the next layer.) Similarly, a network containing two hidden layers is called a three-layer neural network, and so on. The network is feed-forward in that none of the weights cycles back to an input unit or to an output unit of a previous layer. It is fully connected in that each unit provides input to each unit in the next forward layer.
Each output unit takes, as input, a weighted sum of the outputs from units in the previous layer. It applies a nonlinear (activation) function to the weighted input. Multilayer feed-forward neural networks are able to model the class prediction as a nonlinear combination of the inputs. From a statistical point of view, they perform nonlinear regression. Multilayer feed-forward networks, given enough hidden units and enough training samples, can closely approximate any function.
Backpropagation works like this: it learns by iteratively processing a data set of training tuples, comparing the network's prediction for each tuple with the actual known target value. The target value may be the known class label of the training tuple (for classification problems) or a continuous value (for prediction). For each training tuple, the weights are modified so as to minimize the mean squared error between the network's prediction and the actual target value. These modifications are made in the "backwards" direction, that is, from the output layer through each hidden layer down to the first hidden layer (hence the name backpropagation). Although it is not guaranteed, in general the weights will eventually converge, and the learning process stops. The steps involved are expressed in terms of inputs, outputs, and errors, and may seem awkward if this is your first look at neural network learning.
The detailed procedure is described on page 330 of the textbook, with a good worked example on page 333.
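The gist of one backpropagation update can be sketched in plain Python. The 2-2-1 topology, the initial weights, and the learning rate below are made-up illustrative values, not the textbook's example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_h, b_h, w_o, b_o):
    """One hidden layer: returns (hidden activations, output)."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
         for w, b in zip(w_h, b_h)]
    o = sigmoid(sum(wi * hi for wi, hi in zip(w_o, h)) + b_o)
    return h, o

def backprop_step(x, target, w_h, b_h, w_o, b_o, lr=0.5):
    """One backpropagation update for a single training tuple."""
    h, o = forward(x, w_h, b_h, w_o, b_o)
    # Error term at the output unit: Err = O(1 - O)(T - O)
    err_o = o * (1 - o) * (target - o)
    # Error terms at hidden units, propagated backwards through w_o
    err_h = [hj * (1 - hj) * err_o * w_o[j] for j, hj in enumerate(h)]
    # Update output-layer weights and bias
    for j in range(len(w_o)):
        w_o[j] += lr * err_o * h[j]
    b_o += lr * err_o
    # Update hidden-layer weights and biases
    for j in range(len(w_h)):
        for i in range(len(x)):
            w_h[j][i] += lr * err_h[j] * x[i]
        b_h[j] += lr * err_h[j]
    return w_h, b_h, w_o, b_o

# Hypothetical tiny 2-2-1 network with hand-picked initial weights
x, target = [1.0, 0.0], 1.0
w_h, b_h = [[0.2, -0.3], [0.4, 0.1]], [-0.4, 0.2]
w_o, b_o = [-0.3, -0.2], 0.1
_, before = forward(x, w_h, b_h, w_o, b_o)
w_h, b_h, w_o, b_o = backprop_step(x, target, w_h, b_h, w_o, b_o)
_, after = forward(x, w_h, b_h, w_o, b_o)
print(before, after)  # the output moves toward the target after one step
```

One gradient step nudges the output toward the target; repeating it over all training tuples for many epochs is the full training loop.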
A support vector machine (or SVM) is an algorithm that works as follows. It uses a nonlinear mapping to transform the original training data into a higher dimension. Within this new dimension, it searches for the linear optimal separating hyperplane (that is, a “decision boundary” separating the tuples of one class from another). With an appropriate nonlinear mapping to a sufﬁciently high dimension, data from two classes can always be separated by a hyperplane. The SVM ﬁnds this hyperplane using support vectors (“essential” training tuples) and margins (deﬁned by the support vectors).
Although the training time of even the fastest SVMs can be extremely slow, they are highly accurate, owing to their ability to model complex nonlinear decision boundaries. They are much less prone to overfitting than other methods. The support vectors found also provide a compact description of the learned model. SVMs can be used for prediction as well as classification.
If the data is linearly separable, the separating hyperplane can be searched for directly. If the data is linearly inseparable, there are two main steps to deal with it. First, transform the original input data into a higher dimensional space using a nonlinear mapping. Several common nonlinear mappings can be used in this step. Once the data have been transformed into the new higher space, the second step searches for a linear separating hyperplane in the new space.
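The idea behind the nonlinear mapping can be illustrated with the classic kernel trick: a degree-2 polynomial kernel computes the dot product in the higher-dimensional space without ever constructing that space explicitly. This is only a toy illustration of the mapping, not a full SVM; the input vectors are arbitrary examples:

```python
import math

def phi(x):
    """Explicit degree-2 feature map for 2-D input:
    (x1^2, x2^2, sqrt(2)*x1*x2)."""
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def poly_kernel(x, y):
    """K(x, y) = (x . y)^2 -- same value as dot(phi(x), phi(y)),
    but computed in the original 2-D space."""
    return dot(x, y) ** 2

x, y = (1.0, 2.0), (3.0, 0.5)
print(dot(phi(x), phi(y)))  # both print (approximately) 16
print(poly_kernel(x, y))
```

The kernel gives the same answer while doing far less work, which is why SVMs can afford very high-dimensional (even infinite-dimensional) mappings.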
The detailed information can be found in the textbook, starting from page 335; it is too complicated to explain here.
All the classification methods discussed so far are examples of eager learners. Eager learners, when given a set of training tuples, will construct a generalization (i.e., classification) model before receiving new (e.g., test) tuples to classify. We can think of the learned model as being ready and eager to classify previously unseen tuples. Imagine a contrasting lazy approach, in which the learner instead waits until the last minute before doing any model construction in order to classify a given test tuple. That is, when given a training tuple, a lazy learner simply stores it (or does only a little minor processing) and waits until it is given a test tuple. Only when it sees the test tuple does it perform generalization in order to classify the tuple based on its similarity to the stored training tuples. Unlike eager learning methods, lazy learners do less work when a training tuple is presented and more work when making a classification or prediction. Because lazy learners store the training tuples or "instances," they are also referred to as instance-based learners, even though all learning is essentially based on instances.
When making a classiﬁcation or prediction, lazy learners can be computationally expensive. They require efﬁcient storage techniques and are well-suited to implementation on parallel hardware. They offer little explanation or insight into the structure of the data. Lazy learners, however, naturally support incremental learning. They are able to model complex decision spaces having hyperpolygonal shapes that may not be as easily describable by other learning algorithms (such as hyper-rectangular shapes modeled by decision trees).
One popular example method is k-nearest-neighbor classifiers. Nearest-neighbor classifiers are based on learning by analogy, that is, by comparing a given test tuple with training tuples that are similar to it. The training tuples are described by n attributes. Each tuple represents a point in an n-dimensional space. In this way, all of the training tuples are stored in an n-dimensional pattern space. When given an unknown tuple, a k-nearest-neighbor classifier searches the pattern space for the k training tuples that are closest to the unknown tuple. These k training tuples are the k "nearest neighbors" of the unknown tuple. The unknown tuple is assigned the most common class among its k nearest neighbors.
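A minimal k-nearest-neighbor classifier (the 2-D training tuples are made up):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training tuples, using Euclidean distance."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D training tuples with class labels
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.0), "B"), ((4.2, 3.9), "B"), ((3.8, 4.1), "B")]
print(knn_classify(train, (4.0, 3.8), k=3))  # -> "B"
```

Replacing the majority vote with the average of the neighbors' real-valued labels gives the prediction variant.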
Nearest-neighbor classiﬁers can also be used for prediction, that is, to return a real-valued prediction for a given unknown tuple. In this case, the classiﬁer returns the average value of the real-valued labels associated with the k nearest neighbors of the unknown tuple.
Nearest-neighbor classiﬁers use distance-based comparisons that intrinsically assign equal weight to each attribute. They therefore can suffer from poor accuracy when given noisy or irrelevant attributes. The method, however, has been modiﬁed to incorporate attribute weighting and the pruning of noisy data tuples. The choice of a distance metric can be critical. The Manhattan (city block) distance or other distance measurements, may also be used. Nearest-neighbor classiﬁers can be extremely slow when classifying test tuples.
In general, genetic learning starts as follows. An initial population is created consisting of randomly generated rules. Each rule can be represented by a string of bits. As a simple example, suppose that samples in a given training set are described by two Boolean attributes, A1 and A2 , and that there are two classes, C1 and C2 . The rule “IF A1 AND NOT A2 THEN C2 ” can be encoded as the bit string “100,” where the two leftmost bits represent attributes A1 and A2 , respectively, and the rightmost bit represents the class. Similarly, the rule “IF NOT A1 AND NOT A2 THEN C1 ” can be encoded as “001.” If an attribute has k values, where k > 2, then k bits may be used to encode the attribute’s values. Classes can be encoded in a similar fashion. Based on the notion of survival of the ﬁttest, a new population is formed to consist of the ﬁttest rules in the current population, as well as offspring of these rules. Typically, the ﬁtness of a rule is assessed by its classiﬁcation accuracy on a set of training samples. Offspring are created by applying genetic operators such as crossover and mutation. In crossover, substrings from pairs of rules are swapped to form new pairs of rules. In mutation, randomly selected bits in a rule’s string are inverted. The process of generating new populations based on prior populations of rules continues until a population, P, evolves where each rule in P satisﬁes a prespeciﬁed ﬁtness threshold.
Genetic algorithms are easily parallelizable and have been used for classiﬁcation as well as other optimization problems. In data mining, they may be used to evaluate the ﬁtness of other algorithms.
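The whole genetic loop can be sketched as follows, using the 3-bit rule encoding described above. The target concept, the fitness definition (accuracy of the rule used as a two-class classifier), the population size, and the mutation rate are all made-up toy choices:

```python
import random

random.seed(0)

# Training samples for a hypothetical target concept:
#   class = 1 IF A1 AND NOT A2, else class = 0
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 0)]

def fitness(rule):
    """Classification accuracy of a 3-bit rule (a1, a2, c):
    predict class c when the sample matches (a1, a2), else 1 - c."""
    a1, a2, c = rule
    correct = sum(1 for (x1, x2), y in DATA
                  if (c if (x1, x2) == (a1, a2) else 1 - c) == y)
    return correct / len(DATA)

def crossover(p, q):
    """Swap the substrings after a random cut point."""
    cut = random.randint(1, 2)
    return p[:cut] + q[cut:], q[:cut] + p[cut:]

def mutate(rule, p=0.25):
    """Invert each bit with probability p."""
    return tuple(b ^ 1 if random.random() < p else b for b in rule)

population = [tuple(random.randint(0, 1) for _ in range(3))
              for _ in range(6)]
for _ in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 1.0:
        break  # a rule satisfying the fitness threshold has evolved
    # Keep the two fittest, breed the rest from them
    a, b = population[0], population[1]
    children = []
    while len(children) < 4:
        c1, c2 = crossover(a, b)
        children += [mutate(c1), mutate(c2)]
    population = [a, b] + children[:4]

population.sort(key=fitness, reverse=True)
print(population[0], fitness(population[0]))  # best rule found
```

With such a tiny search space the perfect rule (here, "101": IF A1 AND NOT A2 THEN class 1) is found almost immediately; the point is only to show the select/crossover/mutate cycle.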
Numeric prediction is the task of predicting continuous (or ordered) values for given input. Some classiﬁcation techniques (such as backpropagation, support vector machines, and k-nearest-neighbor classiﬁers) can be adapted for prediction. Many problems can be solved by linear regression, and even more can be tackled by applying transformations to the variables so that a nonlinear problem can be converted to a linear one.
Straight-line regression analysis involves a response variable, y, and a single predictor variable, x.
These coefﬁcients can be solved for by the method of least squares, which estimates the best-ﬁtting straight line as the one that minimizes the error between the actual data and the estimate of the line.
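A quick sketch of the least-squares solution for the two coefficients, w0 (intercept) and w1 (slope); the data points are fabricated to lie exactly on y = 2x + 1:

```python
def least_squares(xs, ys):
    """Fit y = w0 + w1 * x by minimizing the sum of squared errors.

    w1 = sum((xi - mean_x)(yi - mean_y)) / sum((xi - mean_x)^2)
    w0 = mean_y - w1 * mean_x
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    w0 = my - w1 * mx
    return w0, w1

# Hypothetical data lying exactly on y = 2x + 1
xs, ys = [1.0, 2.0, 3.0, 4.0], [3.0, 5.0, 7.0, 9.0]
print(least_squares(xs, ys))  # -> (1.0, 2.0)
```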
Decision tree induction can be adapted so as to predict continuous (ordered) values, rather than class labels. There are two main types of trees for prediction—regression trees and model trees. Each regression tree leaf stores a continuous-valued prediction, which is actually the average value of the predicted attribute for the training tuples that reach the leaf. Since the terms “regression” and “numeric prediction” are used synonymously in statistics, the resulting trees were called “regression trees,” even though they did not use any regression equations. By contrast, in model trees, each leaf holds a regression model—a multivariate linear equation for the predicted attribute. Regression and model trees tend to be more accurate than linear regression when the data are not represented well by a simple linear model.
The true positives, true negatives, false positives, and false negatives are also useful in assessing the costs and beneﬁts (or risks and gains) associated with a classiﬁcation model. The cost associated with a false negative is far greater than that of a false positive. In such cases, we can outweigh one type of error over another by assigning a different cost to each.
For a predictor, the test error (rate), or generalization error, is the average loss over the test set. Common loss functions are the absolute error |yi − yi′| and the squared error (yi − yi′)²; averaging them over the d test tuples gives the mean absolute error, (1/d) Σ |yi − yi′|, and the mean squared error, (1/d) Σ (yi − yi′)².
Sometimes, we may want the error to be relative to what it would have been if we had just predicted ȳ, the mean value for y from the training data, D. That is, we can normalize the total loss by dividing by the total loss incurred from always predicting the mean. Relative measures of error include the relative absolute error, Σ |yi − yi′| / Σ |yi − ȳ|, and the relative squared error, Σ (yi − yi′)² / Σ (yi − ȳ)².
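Both the plain and the relative error measures are straightforward to compute; in this sketch the predictions and the training mean are made up:

```python
def error_measures(y_true, y_pred, y_mean_train):
    """Mean absolute/squared test error, plus their relative versions
    (normalized by the loss of always predicting the training mean)."""
    d = len(y_true)
    abs_err = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / d
    sq_err = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / d
    rel_abs = (sum(abs(t - p) for t, p in zip(y_true, y_pred))
               / sum(abs(t - y_mean_train) for t in y_true))
    rel_sq = (sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
              / sum((t - y_mean_train) ** 2 for t in y_true))
    return abs_err, sq_err, rel_abs, rel_sq

# Hypothetical predictions; the training mean of y is assumed to be 3.0
y_true, y_pred = [2.0, 4.0, 6.0], [2.5, 4.0, 5.5]
print(error_measures(y_true, y_pred, 3.0))
```

A relative error below 1 means the predictor beats the trivial "always predict the mean" baseline.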
How can we use the above measures to obtain a reliable estimate of classiﬁer accuracy (or predictor accuracy in terms of error)? Holdout, random subsampling, cross-validation, and the bootstrap are common techniques for assessing accuracy based on randomly sampled partitions of the given data. The use of such techniques to estimate accuracy increases the overall computation time, yet is useful for model selection.
In the holdout method, the given data are randomly partitioned into two independent sets, a training set and a test set. Typically, two-thirds of the data are allocated to the training set, and the remaining one-third is allocated to the test set. The training set is used to derive the model, whose accuracy is estimated with the test set. The estimate is pessimistic because only a portion of the initial data is used to derive the model.
Random subsampling is a variation of the holdout method in which the holdout method is repeated k times. The overall accuracy estimate is taken as the average.
In k-fold cross-validation, the initial data are randomly partitioned into k mutually exclusive subsets, or "folds," D1, D2, ..., Dk, each of approximately equal size. Training and testing are performed k times. In iteration i, partition Di is reserved as the test set, and the remaining partitions are collectively used to train the model. Here, each sample is used the same number of times for training and once for testing. For classification, the accuracy estimate is the overall number of correct classifications from the k iterations, divided by the total number of tuples in the initial data. For prediction, the error estimate can be computed as the total loss from the k iterations, divided by the total number of initial tuples.
In general, stratiﬁed 10-fold cross-validation is recommended for estimating accuracy (even if computation power allows using more folds) due to its relatively low bias and variance.
The bootstrap method samples the given training tuples uniformly with replacement. That is, each time a tuple is selected, it is equally likely to be selected again and readded to the training set.
There are several bootstrap methods. A commonly used one is the .632 bootstrap, which works as follows. Suppose we are given a data set of d tuples. The data set is sampled d times, with replacement, resulting in a bootstrap sample or training set of d samples. It is very likely that some of the original data tuples will occur more than once in this sample. The data tuples that did not make it into the training set end up forming the test set. Suppose we were to try this out several times. As it turns out, on average, 63.2% of the original data tuples will end up in the bootstrap sample, and the remaining 36.8% will form the test set.
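The 63.2% figure is easy to check empirically: the chance that a given tuple is never drawn in d samples is (1 − 1/d)^d, which approaches e^(−1) ≈ 0.368 as d grows. A quick simulation (seed and size chosen arbitrarily):

```python
import random

random.seed(42)

def bootstrap_split(d):
    """Sample d tuple indices with replacement; the untouched
    tuples form the test set."""
    train = [random.randrange(d) for _ in range(d)]
    test = set(range(d)) - set(train)
    return train, test

d = 10_000
train, test = bootstrap_split(d)
print(len(set(train)) / d)  # close to 0.632
print(len(test) / d)        # close to 0.368
```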
The overall accuracy of the model is then estimated as Acc(M) = (1/k) · Σ (0.632 × Acc(Mi)_test_set + 0.368 × Acc(Mi)_train_set), averaged over the k bootstrap samples.
Here, the test-set term is the accuracy of the model obtained with bootstrap sample i when it is applied to test set i, and the training-set term is the accuracy of the model obtained with bootstrap sample i when it is applied to the original set of data tuples.
Bagging and boosting are two techniques for increasing model accuracy. Each generates a set of classification or prediction models, M1, M2, ..., Mk, and voting strategies are used to combine their predictions for a given unknown tuple.
The basic procedure of bagging is like this: for each iteration i (i = 1, 2, ..., k), a training set Di of d tuples is sampled with replacement from the original data D (i.e., a bootstrap sample); a classifier model Mi is learned from each Di; and to classify an unknown tuple, each Mi casts a vote, with the bagged classifier returning the majority class (or, for prediction, the average of the predicted values).
In boosting, weights are assigned to each training tuple. A series of k classiﬁers is iteratively learned. After a classiﬁer Mi is learned, the weights are updated to allow the subsequent classiﬁer, Mi+1 , to “pay more attention” to the training tuples that were misclassiﬁed by Mi . The ﬁnal boosted classiﬁer, M∗, combines the votes of each individual classiﬁer, where the weight of each classiﬁer’s vote is a function of its accuracy. The boosting algorithm can be extended for the prediction of continuous values.
AdaBoost is the representative algorithm:
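The core of one AdaBoost round — computing the classifier's vote weight and reweighting the tuples — can be sketched as follows. The starting weights and the misclassification pattern are made up, and this assumes the common variant in which correctly classified tuples are downweighted by error/(1 − error) before renormalizing:

```python
import math

def adaboost_round(weights, correct):
    """One AdaBoost weight update.

    `weights` are the current (normalized) tuple weights; `correct[i]`
    says whether the current classifier got tuple i right.
    Returns (new_weights, classifier_vote_weight).
    """
    error = sum(w for w, ok in zip(weights, correct) if not ok)
    # Vote weight of this classifier: log((1 - error) / error)
    alpha = math.log((1 - error) / error)
    # Downweight correctly classified tuples by error / (1 - error) ...
    new_w = [w * (error / (1 - error)) if ok else w
             for w, ok in zip(weights, correct)]
    # ... then renormalize so the weights again sum to 1
    total = sum(new_w)
    return [w / total for w in new_w], alpha

weights = [0.25, 0.25, 0.25, 0.25]
correct = [True, True, True, False]   # one misclassified tuple
weights, alpha = adaboost_round(weights, correct)
print(weights, alpha)  # the misclassified tuple's weight grows
```

After the update, the misclassified tuple carries more weight, so the next classifier "pays more attention" to it, exactly as described above.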
Because of the way boosting focuses on the misclassified tuples, it risks overfitting the resulting composite model to such data. Therefore, sometimes the resulting "boosted" model may be less accurate than a single model derived from the same data. Bagging is less susceptible to model overfitting. While both can significantly improve accuracy in comparison to a single model, boosting tends to achieve greater accuracy.
Suppose that we have two models, M1 and M2, and have performed 10-fold cross-validation to obtain a mean error rate for each. How can we determine which model is better? It may seem intuitive to select the model with the lowest error rate; however, the mean error rates are just estimates of error on the true population of future data cases. There can be considerable variance between error rates within any given 10-fold cross-validation experiment. Although the mean error rates obtained for M1 and M2 may appear different, that difference may not be statistically significant.
An ROC curve for M is plotted as follows. Starting at the bottom left-hand corner (where the true positive rate and false-positive rate are both 0), we check the actual class label of the tuple at the top of the list. If we have a true positive (that is, a positive tuple that was correctly classified), then on the ROC curve, we move up and plot a point. If, instead, the tuple really belongs to the 'no' class, we have a false positive. On the ROC curve, we move right and plot a point. This process is repeated for each of the test tuples, each time moving up on the curve for a true positive or toward the right for a false positive. The figure shows the ROC curves of two classification models. The plot also shows a diagonal line where for every true positive of such a model, we are just as likely to encounter a false positive. Thus, the closer the ROC curve of a model is to the diagonal line, the less accurate the model. If the model is really good, initially we are more likely to encounter true positives as we move down the ranked list. Thus, the curve would move steeply up from zero. Later, as we start to encounter fewer and fewer true positives, and more and more false positives, the curve eases off and becomes more horizontal.
To assess the accuracy of a model, we can measure the area under the curve. Several software packages are able to perform such calculations. The closer the area is to 0.5, the less accurate the corresponding model is. A model with perfect accuracy will have an area of 1.0.
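The plotting procedure above also gives a direct way to compute the area under the curve from the ranked labels. A small sketch (the ranking is hypothetical, and ties in classifier score are ignored for simplicity):

```python
def roc_auc(labels_ranked):
    """Area under the ROC curve, given the test-tuple labels sorted
    by decreasing classifier score (True = positive).

    Walk the ranked list: a positive moves the curve up, a negative
    moves it right; the area is accumulated column by column.
    """
    pos = sum(labels_ranked)
    neg = len(labels_ranked) - pos
    tp = 0
    area = 0
    for is_pos in labels_ranked:
        if is_pos:
            tp += 1          # move up
        else:
            area += tp       # move right: add the column under the curve
    return area / (pos * neg)

# Hypothetical ranked labels for a fairly good ranking
print(roc_auc([True, True, False, True, False, False]))  # 8/9 ≈ 0.889
```

This equals the probability that a randomly chosen positive tuple is ranked above a randomly chosen negative one, which is why 0.5 corresponds to random guessing and 1.0 to a perfect ranking.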
In data mining practice, we may often employ a single test set, that is, the same test set can be used for both M1 and M2. In such cases, we do a pairwise comparison of the two models for each 10-fold cross-validation round. That is, for the ith round of 10-fold cross-validation, the same cross-validation partitioning is used to obtain an error rate for M1 and an error rate for M2. The t-test computes the t-statistic with k − 1 degrees of freedom for k samples. For example, here we have k = 10, since the k samples are the error rates obtained from ten rounds of 10-fold cross-validation for each model. The t-statistic for pairwise comparison is t = mean(d) / sqrt(var(d) / k), where di is the difference between the error rates of M1 and M2 in round i, mean(d) is the average difference, and var(d) is the variance of the differences.
To determine whether M1 and M2 are signiﬁcantly different, we compute t and select a signiﬁcance level, sig. In practice, a signiﬁcance level of 5% or 1% is typically used. We then consult a table for the t distribution, available in standard textbooks on statistics. This table is usually shown arranged by degrees of freedom as rows and signiﬁcance levels as columns. Suppose we want to ascertain whether the difference between M1 and M2 is signiﬁcantly different for 95% of the population, that is, sig = 5% or 0.05. We need to ﬁnd the t distribution value corresponding to k − 1 degrees of freedom (or 9 degrees of freedom for our example) from the table. However, because the t distribution is symmetric, typically only the upper percentage points of the distribution are shown. Therefore, we look up the table value for z = sig/2, which in this case is 0.025, where z is also referred to as a conﬁdence limit. If t > z or t < −z, then our value of t lies in the rejection region, within the tails of the distribution. This means that we can reject the null hypothesis that the means of M1 and M2 are the same and conclude that there is a statistically signiﬁcant difference between the two models. Otherwise, if we cannot reject the null hypothesis, we then conclude that any difference between M1 and M2 can be attributed to chance.
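A sketch of the computation (the error rates are fabricated; note that the sample variance here divides by k − 1, the standard form, whereas some texts divide by k):

```python
import math

def paired_t_statistic(err1, err2):
    """t-statistic with k - 1 degrees of freedom for paired error
    rates from k cross-validation rounds."""
    k = len(err1)
    diffs = [a - b for a, b in zip(err1, err2)]
    mean = sum(diffs) / k
    var = sum((d - mean) ** 2 for d in diffs) / (k - 1)
    return mean / math.sqrt(var / k)

# Hypothetical error rates of M1 and M2 over k = 10 rounds
err_m1 = [0.20, 0.22, 0.19, 0.21, 0.20, 0.23, 0.18, 0.22, 0.21, 0.20]
err_m2 = [0.15, 0.16, 0.14, 0.17, 0.15, 0.16, 0.15, 0.14, 0.16, 0.15]
t = paired_t_statistic(err_m1, err_m2)
print(t)  # compare |t| against the critical value for 9 df at sig/2
```

For 9 degrees of freedom and sig = 0.05, the critical value is about 2.262; if |t| exceeds it, the difference between the two models is statistically significant.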
If two test sets are available instead of a single test set, then a nonpaired version of the t-test is used, where the variance between the means of the two models is estimated as sqrt(var(M1)/k1 + var(M2)/k2), where k1 and k2 are the number of cross-validation samples used for M1 and M2, respectively.