Making It All Work Together:
The Midterm Integration of Parallel Thinking
This topic is about connecting the dots. We review everything we have learned—from architectures and programming models to algorithm design and performance metrics—and see how these pieces fit together into a coherent way of thinking about parallel computing.
We do not just memorize formulas; we use them to understand why programs behave the way they do. We see how small decisions—like how to decompose data or choose communication models—affect scalability, efficiency, and cost. Finally, we practice analyzing and diagnosing parallel performance as real engineers would.
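To see what "using the formulas" means in practice, consider a quick worked example with Amdahl's law; the 5% serial fraction below is an arbitrary illustration, not a number from the course materials:

\[
S(p) = \frac{1}{f + \frac{1-f}{p}}, \qquad
f = 0.05 \;\Rightarrow\; S(16) = \frac{1}{0.05 + 0.95/16} \approx 9.1,
\qquad \lim_{p \to \infty} S(p) = \frac{1}{f} = 20.
\]

Even a seemingly negligible 5% serial fraction caps speedup at 20x on any number of processors, and at 16 processors we already pay for 16 while getting barely 9. This is exactly the kind of behavior this topic asks you to predict before you ever profile a program.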
Integrate architectural, algorithmic, and performance concepts into a unified understanding.
Apply theoretical models to evaluate efficiency, scalability, and cost-optimality.
Solve analytical problems that connect communication, synchronization, and computation models.
How do architecture, algorithm design, and performance metrics connect in practice?
What makes a parallel program efficient—or wasteful?
Why do simple ideas like latency and load balance become so important as we scale?
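The load-balance question, in particular, is easy to make concrete. Below is a minimal OpenMP sketch (my own illustration, not code from the handout) in which iteration i does work proportional to i, so a static block decomposition leaves the early threads idle while a dynamic schedule keeps everyone busy:

    #include <stdio.h>
    #include <omp.h>

    /* Simulated irregular work: iteration i costs O(i) operations,
     * so total work is skewed toward the high-numbered iterations. */
    static double work(long i) {
        double s = 0.0;
        for (long k = 0; k < i; k++)
            s += 1.0 / (double)(k + 1);
        return s;
    }

    int main(void) {
        const long N = 20000;
        double sum = 0.0;

        /* Static schedule: one contiguous block per thread. The thread
         * holding the last block does far more work, and the rest wait
         * for it at the implicit barrier. */
        double t0 = omp_get_wtime();
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += work(i);
        printf("static : %.3f s (sum=%.3f)\n", omp_get_wtime() - t0, sum);

        /* Dynamic schedule: idle threads grab the next 64-iteration
         * chunk, trading a little scheduling overhead for balance. */
        sum = 0.0;
        t0 = omp_get_wtime();
        #pragma omp parallel for schedule(dynamic, 64) reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += work(i);
        printf("dynamic: %.3f s (sum=%.3f)\n", omp_get_wtime() - t0, sum);

        return 0;
    }

Compile with gcc -O2 -fopenmp. The exact timings depend on your machine; the point is that the same total work can yield very different parallel times depending on how it is assigned to processors.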
Seeing the Whole System
Revisiting Architecture, Algorithms, and Communication
Mapping from Problem to Parallel Solution
Evaluating and Diagnosing Performance
Using Performance Models: Amdahl, Gustafson, and Isoefficiency
Learning from Case Studies
Synthesizing Ideas for the Midterm
The Core Equations We Live By (collected for reference after this outline)
Practice and Peer Integration
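For reference, here are the core relations the outline points to, in the notation of Grama, Gupta, Karypis, and Kumar (2003) from the reading list: T_s is serial time, T_p is parallel time on p processors, f is the serial fraction, W is the problem size, and T_o is the total overhead.

\[
S = \frac{T_s}{T_p}, \qquad
E = \frac{S}{p}, \qquad
\text{cost} = p\,T_p, \qquad
T_o = p\,T_p - T_s
\]
\[
\text{Amdahl:}\ S(p) = \frac{1}{f + (1-f)/p}
\qquad\quad
\text{Gustafson:}\ S(p) = p - f\,(p-1)
\]
\[
\text{Isoefficiency:}\ \text{hold } E \text{ constant by growing } W \text{ so that } W = \frac{E}{1-E}\,T_o(W, p)
\]

A program is cost-optimal when p T_p grows at the same asymptotic rate as T_s; the isoefficiency function tells you how quickly W must grow with p to stay in that regime.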
Current Lecture Handout
Making It All Work Together: The Midterm Integration of Parallel Thinking, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
The semester at a glance:
Choquette, J., et al. (2021). NVIDIA A100 Tensor Core GPU architecture. IEEE Micro, 41(2), 46–55. https://doi.org/10.1109/MM.2021.3051625
Dinan, J., et al. (2017). Scalable collective communication for extreme-scale systems. The International Journal of High Performance Computing Applications, 31(4), 382–396. https://doi.org/10.1177/1094342016646848
Grama, A., Gupta, A., Karypis, G., & Kumar, V. (2003). Introduction to parallel computing (2nd ed.). Addison-Wesley.
Palmer, T. N., et al. (2022). High-resolution climate modeling in the exascale era. Nature Climate Change, 12(3), 198–207. https://doi.org/10.1038/s41558-022-01290-2
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).