Home

Machine Learning is a young field concerned with developing, analyzing, and applying algorithms for learning from data. Advances in Machine Learning have revolutionized a number of fields, including computer vision, data mining, robotics, automated control, and even cognitive science.

In this reading group we will study the basic mathematics and computational techniques that enable computers to learn from data. The aims of the group are to expose curious students to machine learning, to arm them with tools for solving problems in other areas of engineering, and to give them a window into what doing machine learning research is really like. Students should bring their own background, interests, passions, questions, and insights to the group.

There are no prerequisite skills that you must possess before joining the group. The only requirement is that you are willing to devote sufficient time to reading the papers so that you can start to build your own understanding and also contribute to the learning of others in the group.

Rough Reading Group Structure

I use the word "rough" here because this is only an initial framing. The important thing is to develop a reading group that people are happy with, so this structure is not at all set in stone. That being said, here is our initial thinking for how the group would work.

In the first meeting of the reading group we will focus on getting a feel for machine learning at a very high level, including the basic assumptions that make learning possible, an overview of the different learning settings, and a quick overview of some state-of-the-art machine learning research.

Once we have a basic idea of what machine learning is, we will delve into some of the most fundamental algorithms for machine learning. Each week we will study a different algorithm. A small group of students will lead the discussion and help plan each reading group session. The duties of this small group will be to choose papers for the larger group to read, lead a mini-lecture explaining how the algorithm works, and discuss interesting applications of the algorithm. Each team is also expected to create an implementation of the algorithm so that they gain a deeper understanding of how it works (and, hopefully, can demonstrate it to the class). These presentations are intended to be interactive, and it is totally fine (and completely expected!) if the presenting group doesn't have the answers to every question.

Paul Ruvolo, as the faculty advisor, will be available to meet with each team before they present the material to the reading group. The purpose of this meeting will be to clarify the material, discuss the best strategy for explaining the content, and decide on a final list of papers for the group to read.

A preliminary list of algorithms that we would cover during this initial phase follows (a small illustrative sketch of the first one appears after the list):

    1. Linear Perceptron

    2. Multivariate Linear Regression

    3. Logistic Regression

    4. Support Vector Machines

    5. AdaBoost

    6. Principal Component Analysis

    7. K-Means
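
To give a flavor of the kind of implementation each presenting team would build, here is a minimal sketch of the linear perceptron, the first algorithm on the list. It is illustrative rather than definitive: the function name and the toy data are made up for this example, and it assumes NumPy is available.

    import numpy as np

    def train_perceptron(X, y, epochs=20):
        """Train a linear perceptron on data X (n_samples x n_features)
        with labels y in {-1, +1}. Returns the weight vector and bias."""
        n_samples, n_features = X.shape
        w = np.zeros(n_features)
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                # If the current weights misclassify this example,
                # nudge them toward the correct side of the boundary.
                if yi * (np.dot(w, xi) + b) <= 0:
                    w += yi * xi
                    b += yi
        return w, b

    # Toy usage: two linearly separable clusters.
    X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w, b = train_perceptron(X, y)
    print(np.sign(X @ w + b))  # should match y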

Once we have some of the basics down, we will begin to read contemporary machine learning research papers. In this phase of the group, I expect that the individual passions and interests of the students will essentially drive the selection of the papers we discuss. These sessions will likely also spawn creative ideas for student research projects that may prove exciting to investigate further.

Some Seminal Papers on Machine Learning (these would all be good candidates for the reading group)

  1. "A Few Useful Things to Know About Machine Learning", Domingos

  2. "Experiments with a New boosting Algorithm", Freund and Schapire

  3. "Robust Real-Time Face Detection", Viola and Jones

  4. "The independent components of natural scenes are edge filters", Bell and Sejnowski

  5. "Neural Network for Robot Driving", Pomerleau

  6. "Learning Representations by Back-Propagating Errors", Rumelhart, Hinton, and Williams

  7. "The Perceptron--a perceiving and recognizing automaton", Rosenblatt

  8. "Nonlinear Dimensionality Reduction by Locally Linear Embedding", Roweis and Saul

  9. "Temporal Difference Learning and TD-Gammon", Tesauro

  10. "On computable numbers, with an application to the Entscheidungs Problem", Alan Turing

    1. "A training algorithm for optimum margin classifiers", Boser, Guyon, and Vapnik

    2. "Learning to predict by the method of Temporal difference", Sutton