Image courtesy of: Arnold Reinhold [CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons
Sign up for Piazza using the link on Canvas.
Moore's Law and Dennard scaling have run out of gas, which has slowed the improvement in performance per unit energy of scalar computers. Manufacturers have therefore continued the quest for better performance by providing parallel processors.
In this course, we will explore parallel computation. Because the topic is very broad, we will look at just a few examples of parallel programming. In particular, we will examine how programming parallel processors differs from programming their scalar predecessors.
We will explore a few parallel programming environments, such as OpenMP, MPI (the Message Passing Interface), and environments for GPUs (such as CUDA).
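As a small preview of the flavor of code we will write (this is an illustrative sketch, not an assignment or course-provided example), here is a minimal OpenMP program in C that sums an array in parallel. It can typically be compiled with gcc -fopenmp.

#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    static double a[1000000];
    double sum = 0.0;

    /* Fill the array serially. */
    for (int i = 0; i < n; i++)
        a[i] = 1.0;

    /* Each thread sums a chunk of the array; the reduction clause
       combines the per-thread partial sums into one result safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i];

    printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
    return 0;
}

The single pragma is the only change from the scalar version of this loop; how and why such small annotations (and their heavier-weight MPI and CUDA counterparts) work is a central theme of the course.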
You will learn how to program in these environments, how to think parallel, and how to measure the performance and efficiency of parallel programs. In addition, you will gain insight into what limits parallel computation.
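As one standard preview of such a limit (a textbook result, not specific to these notes): if a fraction $f$ of a program's work is inherently serial, Amdahl's Law bounds the speedup on $p$ processors,

$$S(p) = \frac{1}{f + \frac{1-f}{p}}, \qquad \lim_{p \to \infty} S(p) = \frac{1}{f},$$

and the corresponding efficiency is $E(p) = S(p)/p$. For example, with $f = 0.1$ even an unlimited number of processors can give at most a 10x speedup, which is why measuring and reducing the serial fraction matters so much in practice.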