Reading the Future:
How We Predict Parallel Performance
Home > Instruction > CMSC 180: Introduction to Parallel Computing > Topic 09: Reading the Future
In this topic, we learn how to predict how our programs will behave before we even run them. We explore analytical models—mathematical frameworks that estimate performance using a few measurable parameters such as latency, bandwidth, and overhead.
These models help us design and compare algorithms without wasting expensive computing time. They also reveal why some programs slow down as we scale and how to fix those bottlenecks before they happen.
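As a first taste of such a model, communication time for a single message is often written as a latency term plus a bandwidth term, T(m) = α + m/β (the Hockney model discussed below). The sketch here uses illustrative parameter values (α = 2 µs, β = 1 GB/s), not measurements from any real machine:

```python
# Hockney-style prediction of point-to-point message time.
# ALPHA and BETA are assumed, illustrative values -- on a real system
# they would be measured with a ping-pong benchmark.
ALPHA = 2e-6   # per-message latency in seconds (assumed)
BETA = 1e9     # sustained bandwidth in bytes/second (assumed)

def hockney_time(message_bytes: float) -> float:
    """Predicted transfer time under the Hockney model: T(m) = alpha + m / beta."""
    return ALPHA + message_bytes / BETA

# Small messages are latency-bound; large messages are bandwidth-bound.
for m in (100, 10_000, 1_000_000):
    print(f"{m:>9} bytes -> {hockney_time(m) * 1e6:9.2f} microseconds")
```

Notice that the 100-byte message costs almost the same as the 10 KB one: below a certain size, latency dominates and message size barely matters, which is exactly the kind of insight these models deliver before any code is run.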
Explain why analytical models are needed for performance prediction.
Describe basic communication cost models such as Hockney and LogP.
Relate model parameters (latency, bandwidth, overhead) to observed performance and scalability.
Why do we model performance when we can already measure it?
How do Hockney and LogP differ in describing communication cost?
How can latency and bandwidth tell us whether our algorithm will scale?
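The third question can be made concrete with a toy scaling check. Suppose a fixed problem is split across p workers: per-worker computation shrinks as 1/p, but each worker still pays a roughly constant communication cost per step. All parameter values below (latency, bandwidth, compute rate, work per cell, halo size) are assumptions for illustration only:

```python
# Toy strong-scaling prediction: compute shrinks with p, communication does not.
# Every numeric parameter here is an assumed, illustrative value.
ALPHA = 5e-6       # message latency in seconds (assumed)
BETA = 1e9         # bandwidth in bytes/second (assumed)
FLOP_RATE = 1e10   # per-worker compute rate in flop/s (assumed)

def step_time(n: int, p: int, flops_per_cell: int = 10,
              halo_bytes: int = 8_000) -> float:
    """Predicted time per step: local compute plus two fixed-size
    boundary-exchange messages, each costed with the Hockney model."""
    compute = (n / p) * flops_per_cell / FLOP_RATE
    comm = 2 * (ALPHA + halo_bytes / BETA)
    return compute + comm

n = 10_000_000
t1 = step_time(n, 1)
for p in (1, 4, 16, 64):
    tp = step_time(n, p)
    print(f"p={p:>3}  speedup={t1 / tp:6.2f}  comm fraction={2 * (ALPHA + 8_000 / BETA) / tp:5.1%}")
```

The speedup falls further below the ideal value of p as p grows, because the fixed communication cost becomes a larger and larger fraction of each step. That is the scaling story latency and bandwidth tell, before a single core is booked.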
Why We Model Performance
Predicting Without Running
Finding Bottlenecks Before They Happen
Communication Cost Models
The Hockney Model: The Straight Line of Communication
The LogP Model: Adding Realism
Balancing Computation and Communication
The Triangle of Trade-Offs: Overhead, Latency, and Concurrency
Why Modeling Still Matters
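The LogP model named in the outline refines the straight-line picture with four parameters: latency L, per-message CPU overhead o, gap g (the minimum interval between message injections), and processor count P. A minimal cost sketch, using assumed parameter values and the standard accounting in which a single short message arrives after o + L + o:

```python
# Minimal LogP cost accounting for short messages.
# L, O, G are assumed, illustrative values in seconds.
L = 5e-6   # network latency
O = 1e-6   # per-message CPU overhead on each end (send or receive)
G = 2e-6   # gap: minimum interval between consecutive injections

def logp_one_message() -> float:
    """Time until one short message is received: send overhead + latency
    + receive overhead."""
    return O + L + O

def logp_k_messages(k: int) -> float:
    """Time until the last of k pipelined short messages is received.
    Injection is rate-limited by max(g, o); the last message then pays
    the full o + L + o delivery cost."""
    return (k - 1) * max(G, O) + O + L + O
```

Unlike the Hockney model, LogP charges the sender's CPU for each message (o) and limits how fast messages can be injected (g), which is why it predicts contention effects that a pure latency-plus-bandwidth model misses.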
Current Lecture Handout
Reading the Future: How We Predict Parallel Performance, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
Bilardi, G., Pietracaprina, A., Pucci, G., Herley, K. T., & Spirakis, P. (1999). BSP versus LogP. Algorithmica, 24(4), 405–422. https://doi.org/10.1007/PL00008270
Culler, D., Karp, R., Patterson, D., Sahay, A., Schauser, K. E., Santos, E., Subramonian, R., & von Eicken, T. (1993). LogP: Towards a realistic model of parallel computation. ACM SIGPLAN Notices, 28(7), 1–12. https://doi.org/10.1145/173284.155333
Hockney, R. W., & Jesshope, C. R. (1988). Parallel computers 2: Architecture, programming and algorithms. Adam Hilger.
Michalakes, J., et al. (2015). WRF model scaling analysis on modern supercomputers. Journal of Computational Physics, 295, 103–115. https://doi.org/10.1016/j.jcp.2015.03.048
Shalf, J., et al. (2020). HPC cost and energy efficiency analysis of NASA’s Pleiades supercomputer. IEEE Computer, 53(8), 58–67. https://doi.org/10.1109/MC.2020.2993853
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).