Part 1: Basic Machine Learning
Overview and Perceptron
Regression and Gradient Descent
Support Vector Machines: Hard and Soft Margin SVMs
Linear Algebra and Eigenvector Crash Course
Principal Component Analysis
Logistic Regression
Maximum Likelihood Estimation
Introduction to Neural Networks and Back Propagation
Part 2: Neural Networks and Deep Learning
Introduction to Neural Networks and Back Propagation
Multi-class Classification and Softmax
Regularization
Mini-Batch Gradient Descent
Introduction to RNNs
Unbalanced Data and Performance Measures
Recommended Andrew Ng Deep Learning (Course 1) videos:
Why is Deep Learning Taking Off?
Vectorization
More Vectorization Examples
Activation Functions
Why Non-Linear Activation Functions
Deep L-layer Neural Networks
What Does This Have to Do with the Brain?
Recommended Andrew Ng Deep Learning (Course 2) videos:
Basic Recipe for Machine Learning (i.e., how to reduce training error and validation error)
Regularization (to reduce overfitting and validation error)
Why Regularization Reduces Overfitting?
Other Regularization Methods
Mini-batch Gradient Descent
Understanding Mini-batch Gradient Descent
Convolutional Networks and Computer Vision: Recommended Andrew Ng Videos (Course 4):
Computer Vision
Edge Detection Examples
Padding
Strided Convolutions
Convolutions Over Volumes
One Layer of a Convolutional Network
Simple Convolutional Network Example
Pooling Layers
CNN Example
Why Convolutions
Classic Networks
Transfer Learning
What is Neural Style Transfer?
What are Deep Convolutional Networks Learning?
Cost Function
Content Cost Function
Recurrent Neural Networks and Natural Language Processing (Course 5)
Why Sequence Models
Recurrent Neural Network Model
Different Types of RNNs
Sequence-to-sequence models: Basic Models
---------------------------------------------------------------------------------------------------------
Recommended (but not tested):
Exponentially Weighted Averages
Understanding Exponentially Weighted Averages
Bias Correction in Exponentially Weighted Averages
Gradient Descent with Momentum
RMSProp
Adam Optimization Method
Normalizing Activations in a Network
The Problem of Local Optima
Some 2017 Material (for reference purposes)
Sentiment Analysis
Recommendation Systems
Clustering: K-Means (Andrew Ng Slides)
Deep Reinforcement Learning: Policy Gradient