This page provides the materials and video tutorials for the Markov Chains section of the course. You can increase the quality of a video by clicking the gear icon in the video player.
This video was created by Thomas Sharkey. It focuses on modeling a small-scale Chutes and Ladders game (one that goes on forever) as a Markov Chain. It discusses the states of the Markov Chain and the transition probability matrix, and then formulates the steady-state probability equations. It solves this set of equations to determine the long-run percentage of time the Markov Chain spends in each state or, equivalently, the long-run percentage of time a player spends on a particular space. The problem description is available here: Chutes and Ladders.
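If you would like to reproduce this kind of calculation yourself, the sketch below solves the steady-state equations pi = pi P together with the requirement that the probabilities sum to one. The 4-by-4 transition matrix used here is a placeholder for illustration only; the actual probabilities come from the Chutes and Ladders problem description.

    import numpy as np

    # Placeholder transition matrix (illustration only -- use the matrix
    # from the Chutes and Ladders problem description instead).
    P = np.array([
        [0.00, 0.50, 0.25, 0.25],
        [0.25, 0.00, 0.50, 0.25],
        [0.25, 0.25, 0.00, 0.50],
        [0.50, 0.25, 0.25, 0.00],
    ])

    n = P.shape[0]
    # Steady-state equations: pi = pi @ P, together with sum(pi) = 1.
    # Stack (P^T - I) pi = 0 with the normalization row and solve by least squares.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)  # long-run fraction of time spent on each space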
This video was created by Thomas Sharkey. It focuses on determining the expected first passage times between various states in the Chutes and Ladders Markov Chain. You can become familiar with this Markov Chain by watching the previous video. The problem description is available here: Chutes and Ladders.
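As a numerical companion to the video, the expected first passage times to a fixed target state j satisfy the linear system m_i = 1 + sum over k != j of p_ik * m_k. The sketch below solves that system; the transition matrix is again a placeholder rather than the one from the problem description.

    import numpy as np

    # Placeholder transition matrix (illustration only).
    P = np.array([
        [0.00, 0.50, 0.25, 0.25],
        [0.25, 0.00, 0.50, 0.25],
        [0.25, 0.25, 0.00, 0.50],
        [0.50, 0.25, 0.25, 0.00],
    ])

    def expected_first_passage_times(P, target):
        """Solve m_i = 1 + sum_{k != target} p_ik * m_k for every state i."""
        n = P.shape[0]
        keep = [k for k in range(n) if k != target]
        Q = P[np.ix_(keep, keep)]      # transitions among the non-target states
        m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        times = np.zeros(n)
        times[keep] = m                # expected number of steps to reach `target`
        return times                   # the target's own entry is left at 0

    print(expected_first_passage_times(P, target=0))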
This video was created by Thomas Sharkey. It focuses on modeling the playing of the 17th hole at TPC Sawgrass (the famous island green) as a Markov Chain with absorbing states. We discuss how to formulate and solve the equations associated with ending up in a particular absorbing state to determine the likelihood that I will make a par or better on the hole. The problem description is available here: Playing the 17th Hole at TPC Sawgrass. You can also access the transition probability diagram at: Transition Probability Diagram.
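For readers who want to check such a calculation numerically: with the states ordered so the transient states come first, the absorption probabilities are given by B = (I - Q)^(-1) R, where Q holds the transient-to-transient probabilities and R the transient-to-absorbing probabilities. The state labels and numbers below are hypothetical placeholders; the real values are in the transition probability diagram.

    import numpy as np

    # Hypothetical ordering: transient states first (tee, drop zone, on the green),
    # then absorbing states (par or better, bogey or worse). All probabilities
    # here are placeholders, not the values from the problem description.
    Q = np.array([            # transient -> transient
        [0.00, 0.15, 0.70],
        [0.00, 0.05, 0.80],
        [0.00, 0.00, 0.00],
    ])
    R = np.array([            # transient -> absorbing
        [0.10, 0.05],
        [0.00, 0.15],
        [0.60, 0.40],
    ])

    # Absorption probabilities B[i, a] = P(absorbed in a | start in transient i),
    # obtained from B = (I - Q)^{-1} R.
    B = np.linalg.solve(np.eye(Q.shape[0]) - Q, R)
    print(B[0])  # chance of "par or better" vs. "bogey or worse" from the tee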
This video was created by Thomas Sharkey. It focuses on modeling the market share of a company and its advertising decisions in each state as a Markov Decision Process (MDP). It then formulates a linear program to solve the MDP and determine the optimal advertising decision for each state of the underlying Markov Chain. The problem description is available here: A Market Share and Advertising Problem.
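One standard linear-programming formulation of an average-reward MDP uses variables y(s, a), the long-run fraction of periods spent in state s while taking action a: maximize the expected reward subject to flow-balance and normalization constraints, then read the optimal action in each state from the positive y(s, a). The sketch below sets this up for a made-up two-state, two-action market-share example; the rewards and transition probabilities are placeholders, not the data from the problem description.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical market-share MDP (illustration only).
    # States: 0 = low market share, 1 = high market share.
    # Actions: 0 = don't advertise, 1 = advertise.
    r = np.array([[2.0, 1.0],      # r[s, a]: profit per period
                  [5.0, 4.0]])
    P = np.array([                 # P[s, a, s']: transition probabilities
        [[0.8, 0.2], [0.5, 0.5]],
        [[0.4, 0.6], [0.2, 0.8]],
    ])

    nS, nA = r.shape
    # Variables y[s, a]: long-run fraction of periods in state s taking action a.
    # Maximize sum r[s, a] * y[s, a]  <=>  minimize the negated rewards.
    c = -r.flatten()

    # Flow balance for each state j: sum_a y[j, a] = sum_{s, a} P[s, a, j] * y[s, a],
    # plus the normalization sum_{s, a} y[s, a] = 1.
    A_eq = np.zeros((nS + 1, nS * nA))
    for j in range(nS):
        for s in range(nS):
            for a in range(nA):
                A_eq[j, s * nA + a] = (1.0 if s == j else 0.0) - P[s, a, j]
    A_eq[nS, :] = 1.0
    b_eq = np.zeros(nS + 1)
    b_eq[nS] = 1.0

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
    y = res.x.reshape(nS, nA)
    print(y)  # the optimal policy uses action a in state s whenever y[s, a] > 0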