The Big Picture: How It All Connects in Parallel Thinking
This final topic brings everything together — the ideas, the models, and the mindset. We now see how architecture, algorithms, and analysis form one continuous cycle of design. Parallel computing is not only about writing faster code; it’s about understanding how systems think, how data moves, and how ideas scale.
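One formula ties these three threads together: Amdahl's law, S(p) = 1 / ((1 − f) + f/p), which bounds the speedup of a program whose parallelizable fraction is f when run on p processors. The C sketch below tabulates this bound for an assumed f = 0.95; the fraction and core counts are illustrative choices, not measurements from any particular system.

```c
/* Amdahl's law sketch: predicted speedup S(p) = 1 / ((1 - f) + f / p),
 * where f is the parallelizable fraction of the work and p the core count.
 * The fraction below (0.95) is an assumed value for illustration. */
#include <stdio.h>

static double amdahl_speedup(double parallel_fraction, int cores)
{
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main(void)
{
    const double f = 0.95;                       /* assumed parallelizable fraction */
    const int core_counts[] = {1, 2, 4, 8, 16, 64, 1024};

    for (size_t i = 0; i < sizeof core_counts / sizeof core_counts[0]; i++) {
        int p = core_counts[i];
        printf("p = %4d  ->  speedup = %6.2f\n", p, amdahl_speedup(f, p));
    }
    /* As p grows, speedup approaches 1 / (1 - f) = 20: the serial 5%
     * dominates no matter how many cores are added. */
    return 0;
}
```

Even with 95% of the work parallelized, the speedup saturates near 20×, which is why this topic treats architecture, algorithm design, and analysis as one cycle rather than three separate subjects.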
Learning objectives:

Combine the key principles of architecture, algorithm design, and performance modeling.
Recognize emerging trends in high-performance and distributed computing.
Reflect on how parallel computing shapes modern research and engineering.
Guide questions:

How have new architectures like GPUs and cloud systems changed parallel computing?
What timeless ideas remain true even as technology evolves? (One such idea is sketched after these questions.)
How can our understanding of parallelism help us design or study new systems?
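One candidate answer to the "timeless ideas" question is the decompose-and-reduce pattern: split the work across workers, then combine the partial results. The OpenMP sketch below expresses it on a multicore CPU; the problem size and data are illustrative assumptions, not part of any assigned exercise.

```c
/* A parallel sum with OpenMP: the decompose-then-reduce pattern that
 * recurs across shared memory, GPUs, and distributed systems.
 * Compile with: cc -fopenmp sum.c -o sum */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const int n = 10000000;                 /* illustrative problem size */
    double *x = malloc(n * sizeof *x);
    if (!x) return 1;
    for (int i = 0; i < n; i++)
        x[i] = 1.0 / (i + 1.0);             /* partial harmonic series */

    double sum = 0.0;
    double t0 = omp_get_wtime();
    /* Each thread sums a chunk of the array; OpenMP combines the
     * per-thread partial sums into one result. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += x[i];
    double t1 = omp_get_wtime();

    printf("sum = %.6f in %.4f s on up to %d threads\n",
           sum, t1 - t0, omp_get_max_threads());
    free(x);
    return 0;
}
```

Swapping the pragma for a CUDA kernel launch or an MPI_Reduce changes the machinery, not the idea: the same decompose-and-reduce shape survives every architectural shift covered this semester.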
Lecture outline:

Reconnecting the Core Concepts
The Two Worlds of Architecture
The Strategy Behind Good Algorithms
Where We Are Headed
From Cores to Clouds: The New Architectures
Rethinking Programming Models
Seeing the Big Picture
How Everything Fits Together
Looking Forward: Research and Impact
Current Lecture Handout
The Big Picture: How It All Connects in Parallel Thinking, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
The semester at a glance:
References:

Bautista-Gomez, L., Tsuboi, S., Komatitsch, D., Cappello, F., Matsuoka, S., & Maruyama, N. (2011). FTI: High performance fault tolerance interface for hybrid systems. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '11), 1–12. https://doi.org/10.1145/2063384.2063427
Carlson, J., Bachan, J., & Bonachea, D. (2019). UPC++: A high-performance PGAS library for C++. Proceedings of the ACM SIGPLAN International Workshop on Libraries, Languages, and Compilers for Array Programming, 1–11. https://doi.org/10.1145/3315454.3329961
Dongarra, J., et al. (2022). Report on the 2022 exascale computing project. Communications of the ACM, 65(11), 70–82. https://doi.org/10.1145/3554380
Gao, X., Lu, X., & Panda, D. K. (2021). Performance analysis of cloud-based HPC frameworks for data-intensive applications. Journal of Cloud Computing, 10(1), 1–14. https://doi.org/10.1186/s13677-021-00251-0
Grama, A., Gupta, A., Karypis, G., & Kumar, V. (2003). Introduction to parallel computing (2nd ed.). Addison-Wesley.
Mittal, S., & Vetter, J. S. (2015). A survey of CPU-GPU heterogeneous computing techniques. ACM Computing Surveys, 47(4), 1–35. https://doi.org/10.1145/2788396
Narayanan, D., et al. (2021). Efficient large-scale language model training on GPU clusters using Megatron-LM. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '21), 1–15. https://doi.org/10.1145/3458817.3476209
Nickolls, J., Buck, I., Garland, M., & Skadron, K. (2008). Scalable parallel programming with CUDA. ACM Queue, 6(2), 40–53. https://doi.org/10.1145/1365490.1365500
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).