Markov decision process (MDP): Basics of dynamic programming; finite-horizon MDPs with quadratic cost: Bellman equation, value iteration; optimal stopping problems; partially observable MDPs; infinite-horizon discounted cost problems: Bellman equation, value iteration and its convergence analysis, policy iteration and its convergence analysis, linear programming; stochastic shortest path problems; undiscounted cost problems; average cost problems: optimality equation, relative value iteration, policy iteration, linear programming, Blackwell optimal policies; semi-Markov decision processes; constrained MDPs: relaxation via Lagrange multipliers
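To make the discounted-cost Bellman equation and value iteration above concrete, here is a minimal tabular sketch in Python (an illustration, not part of the course materials). It follows the syllabus's cost-minimization convention; the array names P (transition kernel) and R (expected one-step cost) and the random test MDP are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Tabular value iteration for an infinite-horizon discounted-cost MDP.

    P[s, a, s'] = Pr(s' | s, a); R[s, a] = expected one-step cost.
    Returns the (approximate) optimal value function and a greedy policy.
    """
    V = np.zeros(R.shape[0])
    for _ in range(max_iter):
        # Bellman optimality operator: (TV)(s) = min_a [R(s,a) + gamma * E[V(s')]]
        Q = R + gamma * (P @ V)              # shape (n_states, n_actions)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:  # sup-norm stopping rule
            V = V_new
            break
        V = V_new
    return V, Q.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 3
    P = rng.random((n_states, n_actions, n_states))
    P /= P.sum(axis=2, keepdims=True)        # normalize rows into distributions
    R = rng.random((n_states, n_actions))
    V, policy = value_iteration(P, R)
    print("V:", V, "greedy policy:", policy)
```

Since the Bellman operator is a gamma-contraction in the sup norm, the iterates converge geometrically, which is the convergence analysis the syllabus refers to.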
Reinforcement learning: Basics of stochastic approximation, Kiefer-Wolfowitz algorithm, simultaneous perturbation stochastic approximation, asynchronous stochastic approximation, two-timescale stochastic approximation; Q-learning and its convergence analysis, temporal difference learning and its convergence analysis, function approximation techniques, actor-critic algorithms, policy gradient methods; regret analysis in stationary and non-stationary settings.
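As a pointer to how stochastic approximation underlies Q-learning, the following is a minimal tabular sketch (an illustration, not the course's reference implementation). The env_step sampling interface, the make_sampler helper, and the 1/n(s,a) step-size schedule are assumptions chosen to satisfy the standard Robbins-Monro conditions; costs are minimized, matching the MDP section above.

```python
import numpy as np

def make_sampler(P, R, rng):
    """Hypothetical one-step simulator for a tabular MDP with kernel P and cost R."""
    def env_step(s, a):
        s_next = rng.choice(P.shape[2], p=P[s, a])
        return R[s, a], int(s_next)
    return env_step

def q_learning(env_step, n_states, n_actions, gamma=0.9,
               n_steps=200_000, eps=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    s = 0
    for _ in range(n_steps):
        # epsilon-greedy exploration around the (cost-minimizing) greedy action
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmin())
        cost, s_next = env_step(s, a)
        visits[s, a] += 1
        alpha = 1.0 / visits[s, a]  # step sizes: sum alpha = inf, sum alpha^2 < inf
        # stochastic approximation step toward the sampled Bellman target
        Q[s, a] += alpha * (cost + gamma * Q[s_next].min() - Q[s, a])
        s = s_next
    return Q
```

Under the usual step-size and infinite-visitation conditions this iteration converges to the optimal Q-factors, the result whose proof (via the ODE viewpoint of stochastic approximation) the course develops.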
"Dynamic programming and optimal control," Vol. 1 & 2, by Dimitri Bertsekas
"Neuro-dynamic programming," by Dimitri Bertsekas and John N. Tsitsiklis
"Stochastic approximation: a dynamical systems viewpoint," by Vivek S. Borkar
"Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods," by S. Bhatnagar, H.L. Prasad and L.A. Prashanth
Selected research papers