A Markov chain is a random process in which each transition to a new state is determined in a "memoryless" manner that depends only on the current state. Markov chains are ubiquitous objects in probability theory that permeate both pure and applied mathematics. When one runs an irreducible, aperiodic Markov chain for a long time, the distribution of its state converges to a stationary distribution. This course will develop tools for computing this stationary distribution and for estimating how quickly the Markov chain converges to it.
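As a small illustration of the convergence described above (not part of the course materials — the transition matrix here is an arbitrary example chosen for the sketch), one can iterate an initial distribution through an irreducible, aperiodic transition matrix and watch it settle at a fixed point:

```python
import numpy as np

# Transition matrix of a small irreducible, aperiodic Markov chain.
# Row i gives the probabilities of moving from state i to each state,
# so every row sums to 1.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Start from an arbitrary initial distribution and repeatedly apply P.
mu = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    mu = mu @ P

# After many steps mu approximates the stationary distribution pi,
# which is characterized by the fixed-point equation pi = pi P.
pi = mu
print(np.round(pi, 4))
```

Running the loop longer only changes `mu` below numerical precision: the chain has effectively reached its stationary distribution, and the rate at which it gets there is the kind of question this course studies.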
Meeting times: T & TH 10:30-11:45 AM
Contact Information:
Office: SC 235
Email: colindefant@gmail.com
Office Hours: T 12:00-1:00 PM (this might change).
Recorded Lectures
Problem Set 1 TBA
Problem Set 2 TBA
Problem Set 3 TBA
Problem Set 4 TBA
Problem Set 5 TBA
Potential Sources for Final Projects
(You are allowed to choose something that is not from this list.)
TBA