Reading the Numbers:
How We Test If Our Models Tell the Truth
In this topic, we apply analytical models to real examples. We learn how to estimate, validate, and interpret the performance of parallel programs. Our goal is to turn theory into insight — to use equations not as decorations but as mirrors reflecting what actually happens when our code runs.
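To make this concrete before we start, here is a minimal sketch of what "estimate and validate" can look like in code. It compares an Amdahl-style speedup prediction, S(p) = 1 / ((1 - f) + f/p), against measured speedup T(1)/T(p). The parallelizable fraction f and the runtimes in the table are placeholder values for illustration, not measurements from any real program.

/* amdahl_check.c -- a minimal sketch (placeholder numbers, not real
 * measurements) of how a model prediction can be compared with data.
 *
 *   predicted speedup  S(p) = 1 / ((1 - f) + f / p)    [Amdahl, 1967]
 *   measured  speedup       = T(1) / T(p)
 *   efficiency              = speedup / p
 */
#include <stdio.h>

int main(void)
{
    double f  = 0.90;          /* assumed parallelizable fraction      */
    double t1 = 40.0;          /* "measured" serial runtime, seconds   */

    /* hypothetical measured runtimes T(p) for p = 2, 4, 8, 16 workers */
    int    procs[]    = { 2, 4, 8, 16 };
    double measured[] = { 21.5, 12.3, 7.9, 5.8 };
    int    n = sizeof procs / sizeof procs[0];

    printf("%4s %10s %10s %10s %10s\n",
           "p", "S_model", "S_actual", "E_actual", "error%");

    for (int i = 0; i < n; i++) {
        int    p        = procs[i];
        double s_model  = 1.0 / ((1.0 - f) + f / p);   /* prediction  */
        double s_actual = t1 / measured[i];            /* observation */
        double e_actual = s_actual / p;                /* efficiency  */
        double err      = 100.0 * (s_model - s_actual) / s_actual;

        printf("%4d %10.2f %10.2f %10.2f %9.1f%%\n",
               p, s_model, s_actual, e_actual, err);
    }
    return 0;
}

A persistent, growing gap between S_model and S_actual is usually the interesting part: it points at costs the model ignores, such as communication, memory contention, or load imbalance, which is exactly the kind of interpretation this topic is about.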
Learning outcomes:
Use analytical models to estimate runtime, speedup, and efficiency.
Validate model predictions using measured or simulated data.
Interpret analytical results to guide algorithmic and architectural improvements.
Guiding questions:
How can analytical models guide the optimization of a parallel program?
What causes gaps between predicted and actual performance?
How do we decide which model best represents a given system or algorithm?
Topic outline:
Quantitative Analysis of Parallel Programs
Modeling Runtime and Speedup
Predicting Scalability Under Various Loads
Platform-Aware Optimization
Modeling Memory Effects
Validating the Model with Real Data
Case Study: Matrix Multiplication (see the modeling sketch after this outline)
Building the Model
Interpreting the Results
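For the matrix-multiplication case study at the end of the outline, "building the model" can start as simply as the sketch below: a compute term that scales as 2n^3/p plus a crude communication term for a row-block distribution. Every constant here (per-flop cost, latency, per-element transfer cost) is a placeholder to be calibrated on your own platform, and the communication term is a simplifying assumption, not necessarily the model developed in the lecture handout.

/* matmul_model.c -- a first-cut analytical model for parallel matrix
 * multiplication with a row-block distribution.  All machine constants
 * are placeholder values; calibrate them on your own platform.
 *
 *   T(n, p) ~= (2 n^3 / p) * t_flop            (local computation)
 *            + alpha * log2(p) + beta * n*n    (data distribution; zero when p = 1)
 */
#include <stdio.h>
#include <math.h>

static double model_time(double n, double p,
                         double t_flop, double alpha, double beta)
{
    double t_comp = 2.0 * n * n * n / p * t_flop;
    double t_comm = (p > 1.0) ? alpha * log2(p) + beta * n * n : 0.0;
    return t_comp + t_comm;
}

int main(void)
{
    double n      = 2048.0;   /* matrix order                         */
    double t_flop = 1.0e-9;   /* seconds per floating-point operation */
    double alpha  = 1.0e-5;   /* per-message latency, seconds         */
    double beta   = 4.0e-9;   /* per-element transfer cost, seconds   */

    double t1 = model_time(n, 1.0, t_flop, alpha, beta);

    printf("%6s %12s %10s %10s\n", "p", "T(n,p) s", "speedup", "efficiency");
    for (int p = 1; p <= 64; p *= 2) {
        double tp = model_time(n, (double)p, t_flop, alpha, beta);
        printf("%6d %12.3f %10.2f %10.2f\n", p, tp, t1 / tp, t1 / tp / p);
    }
    return 0;
}

Substituting measured values for t_flop, alpha, and beta, then comparing the predicted T(n, p) against timed runs as in the earlier sketch, is the essence of "Validating the Model with Real Data"; where the two disagree is where "Interpreting the Results" begins.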
Current Lecture Handout
Reading the Numbers: How We Test If Our Models Tell the Truth, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
Readings:
Amdahl, G. M. (1967). Validity of the single processor approach to achieving large scale computing capabilities. AFIPS Conference Proceedings, 30, 483–485. https://doi.org/10.1145/1465482.1465560
Balaprakash, P., et al. (2018). Machine learning for performance modeling and auto-tuning in HPC. Proceedings of the IEEE, 106(11), 2104–2119. https://doi.org/10.1109/JPROC.2018.2841200
Marjanović, V., Gioiosa, R., Beltran, V., & Labarta, J. (2010). Overlapping communication and computation by using MPI task-aware runtime. IEEE International Conference on Cluster Computing, 1–10. https://doi.org/10.1109/CLUSTER.2010.19
Shalf, J., et al. (2020). The future of high-performance computing and modeling. IEEE Computer, 53(8), 44–55. https://doi.org/10.1109/MC.2020.2993853
Shan, H., et al. (2020). Characterizing communication performance for exascale applications. Concurrency and Computation: Practice and Experience, 32(1), e5030. https://doi.org/10.1002/cpe.5030
Willcock, J., Hoefler, T., & Lumsdaine, A. (2019). Modeling cache performance in multi-core architectures. ACM Transactions on Architecture and Code Optimization, 16(2), 1–25. https://doi.org/10.1145/3319420
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).