Basic Principles: Introduction, the concept learning task, general-to-specific ordering of hypotheses, Vapnik-Chervonenkis (VC) dimension. Experimental Evaluation: overfitting, cross-validation, the loss-plus-regularization framework for classification.
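To make the loss-plus-regularization and cross-validation ideas concrete, a minimal sketch (assuming scikit-learn is available; the dataset and the grid of C values are illustrative) that selects the L2 regularization strength of a logistic-regression classifier by 5-fold cross-validation:

```python
# Minimal sketch: choose the regularization strength of an L2-regularized
# logistic regression by 5-fold cross-validation. Smaller C means stronger
# regularization (guards against overfitting); the dataset is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
for C in [0.01, 0.1, 1.0, 10.0]:
    clf = make_pipeline(StandardScaler(), LogisticRegression(C=C, max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold CV accuracy
    print(f"C={C:>5}: mean CV accuracy = {scores.mean():.3f}")
```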
Supervised Learning: decision tree learning. Instance-Based Learning: the k-nearest-neighbour algorithm, support vector machines. Ensemble Learning: boosting, bagging, random forests. Artificial Neural Networks: linear threshold units, perceptrons, multilayer networks and back-propagation.
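As a pointer to how instance-based methods work, a from-scratch sketch of the k-nearest-neighbour classifier (Euclidean distance, majority vote); the toy data and function names are illustrative:

```python
# k-NN classification: store the training set, and label a query point
# by a majority vote among its k closest stored instances.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every instance
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]  # majority vote

X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0], [5.0, 6.0]])
y_train = np.array([0, 0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.5, 5.5]), k=3))  # -> 1
```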
Probabilistic Models: maximum likelihood estimation, maximum a posteriori (MAP) estimation. Bayes classifiers: naive Bayes and the Bayes-optimal classifier. The minimum description length principle. Bayesian Networks: inference in Bayesian networks, Bayes net structure learning.
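A minimal Gaussian naive Bayes sketch, showing per-class maximum likelihood estimates of feature means and variances followed by a Bayes decision rule; the data and variable names are illustrative:

```python
# Gaussian naive Bayes: per class, fit the MLE mean and variance of each
# feature, then classify by the largest log-posterior.
import numpy as np

def fit_gaussian_nb(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0),         # MLE mean per feature
                     Xc.var(axis=0) + 1e-9,   # MLE variance (tiny floor)
                     len(Xc) / len(X))        # class prior P(y = c)
    return params

def predict(params, x):
    def log_post(mu, var, prior):
        # log P(y=c) + sum_j log N(x_j | mu_j, var_j), constants dropped
        return np.log(prior) - 0.5 * np.sum(np.log(var) + (x - mu) ** 2 / var)
    return max(params, key=lambda c: log_post(*params[c]))

X = np.array([[1.0, 2.0], [1.2, 1.8], [4.0, 5.0], [4.2, 5.1]])
y = np.array([0, 0, 1, 1])
print(predict(fit_gaussian_nb(X, y), np.array([4.1, 5.0])))  # -> 1
```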
Unsupervised Learning: k-means and hierarchical clustering, Gaussian mixture models, the expectation-maximization (EM) algorithm, hidden Markov models. Dimensionality Reduction: principal component analysis (PCA) and kernel PCA.
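A bare-bones sketch of Lloyd's algorithm for k-means clustering (random initialization, then alternating assignment and update steps); a production version would also handle empty clusters:

```python
# Lloyd's algorithm: assign each point to its nearest centre, move each
# centre to the mean of its assigned points, repeat until stable.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iter):
        # Assignment step: nearest centre for each point.
        labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
        # Update step: each centre moves to the mean of its points.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, labels = kmeans(X, k=2)
print(centers)  # two centres, roughly at the two cluster means
```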
Computational Learning Theory: probably approximately correct (PAC) learning, sample complexity, computational complexity of training.
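A numeric illustration of the standard PAC sample-complexity bound for a finite, realisable hypothesis class H, m >= (1/epsilon)(ln|H| + ln(1/delta)); the particular numbers below are illustrative:

```python
# PAC sample complexity for a finite, realisable hypothesis class:
# with m >= (1/eps) * (ln|H| + ln(1/delta)) examples, any consistent
# hypothesis has true error <= eps with probability >= 1 - delta.
import math

def pac_sample_bound(hypothesis_space_size, epsilon, delta):
    return math.ceil((math.log(hypothesis_space_size) + math.log(1 / delta)) / epsilon)

# e.g. |H| = 2**20, error at most 0.05 with probability at least 0.95
print(pac_sample_bound(2 ** 20, epsilon=0.05, delta=0.05))  # -> 338
```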
Learning Methodologies: reinforcement learning, representation learning, semi-supervised learning, active learning.
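A toy tabular Q-learning sketch for a five-state corridor (reward 1 for reaching the rightmost state); the environment and hyperparameters are illustrative:

```python
# Tabular Q-learning on a 1-D corridor: states 0..4, actions 0 = left,
# 1 = right; reaching state 4 ends the episode with reward 1.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(500):                       # episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection, breaking ties at random
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

# Learned policy: states 0-3 should prefer action 1 (right); state 4 is terminal.
print(np.argmax(Q, axis=1))
```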
Texts and References:
Tom Mitchell. Machine Learning. McGraw Hill, 1997.
Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
Richard O. Duda, Peter E. Hart, David G. Stork. Pattern Classification. John Wiley & Sons, 2006.
Kevin Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
Hal Daumé III. A Course in Machine Learning. 2015.
Trevor Hastie, Robert Tibshirani, Jerome Friedman. The Elements of Statistical Learning. Springer, 2009.