Speaking the Same Language:
How Parallel Programs Coordinate and Communicate
In this topic, we move from hardware to software—the layer where we, as programmers, tell many processors how to cooperate. We learn that parallel programs are like conversations: sometimes we talk through shared notes, sometimes we send messages, and sometimes we do both.
We study how different programming models manage memory and communication, why synchronization is both necessary and dangerous, and how message passing and shared memory complement each other. By understanding these paradigms, we learn to think not only about doing tasks faster but also about doing them together without chaos.
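To make "necessary and dangerous" concrete before the lecture, here is a minimal C/OpenMP sketch, not taken from the handout; the counter names and iteration count are arbitrary choices. Many threads write the same shared note: without coordination the result is wrong, and the fix costs some of the parallelism.

```c
/* Minimal sketch of a data race and its fix.
 * Build with OpenMP enabled, e.g.: gcc -fopenmp race_demo.c -o race_demo */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int increments = 100000;
    long unsafe = 0, safe = 0;

    /* Unsynchronized updates: threads overwrite each other's work,
     * so the final value usually falls short of the expected total. */
    #pragma omp parallel for
    for (int i = 0; i < increments; i++) {
        unsafe++;                      /* data race */
    }

    /* Synchronized updates: correct, but the atomic operation serializes
     * the threads at this point, which costs performance. */
    #pragma omp parallel for
    for (int i = 0; i < increments; i++) {
        #pragma omp atomic
        safe++;
    }

    printf("expected %d, unsafe %ld, safe %ld\n", increments, unsafe, safe);
    return 0;
}
```

Run a few times: the unsynchronized counter typically differs from run to run, while the synchronized one is always correct but pays for the coordination.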
Identify and explain the main programming models used in parallel computing.
Describe how communication and synchronization happen in different paradigms.
Analyze the trade-offs between abstraction level, performance, and ease of programming.
How do message passing and shared memory differ in program design and speed?
Why do we need synchronization, and why is it hard to get right?
When is it best to combine MPI and OpenMP into one hybrid model?
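As a first concrete look at the question on message passing versus shared memory, here is a hedged C/MPI sketch of a distributed-memory sum; the problem size and variable names are illustrative. Each process owns only its slice of the data and must communicate its partial result explicitly, whereas the shared-memory version of the same computation is a single loop marked with `#pragma omp parallel for reduction(+:sum)` over data visible to all threads.

```c
/* Message-passing version of a simple sum.
 * Build with mpicc, run with mpirun -np <p> ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process computes a partial sum over its own share of 1..N. */
    const long N = 1000;               /* arbitrary problem size for the sketch */
    long local = 0;
    for (long i = rank + 1; i <= N; i += size) {
        local += i;
    }

    /* Explicit communication: partial results travel as messages and are
     * combined on rank 0. Nothing is shared implicitly between processes. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("sum of 1..%ld = %ld (expected %ld)\n", N, total, N * (N + 1) / 2);
    }

    MPI_Finalize();
    return 0;
}
```

The design difference shows up in the code itself: the MPI version must decide which process owns which indices and when results move, while the shared-memory version leaves data placement implicit and only marks the loop as parallel.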
Two Ways to Share: Memory Models
Shared Memory: One Blackboard for Everyone
Distributed Memory: Passing Messages, Not Notes
Different Paradigms, Different Mindsets
Four Paradigms of Parallel Programming
MPI and OpenMP: Partners in Power
Organizing the Work: Tasks and Communication
Task Decomposition and Process Mapping
Synchronous vs. Asynchronous Communication
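Previewing the outline item "MPI and OpenMP: Partners in Power" above, the following is a minimal hybrid sketch under simple assumptions (typically one MPI process per node with OpenMP threads inside it; the problem size and names are illustrative): MPI passes messages between processes while OpenMP threads share each process's memory.

```c
/* Minimal hybrid MPI + OpenMP sketch.
 * Build with an MPI wrapper and OpenMP, e.g.: mpicc -fopenmp hybrid.c */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    /* Ask MPI for thread support, since OpenMP threads live inside each rank. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long N = 10000;               /* arbitrary problem size */
    long local = 0;

    /* Shared memory inside the rank: threads split this rank's block. */
    #pragma omp parallel for reduction(+:local)
    for (long i = rank + 1; i <= N; i += size) {
        local += i;
    }

    /* Message passing between ranks: combine the per-process results. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("hybrid sum = %ld (expected %ld)\n", total, N * (N + 1) / 2);
    }

    MPI_Finalize();
    return 0;
}
```

This is the hybrid idea in miniature: MPI handles communication across nodes that share nothing, and OpenMP exploits the cores that do share memory within each node.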
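For "Synchronous vs. Asynchronous Communication", a small hedged sketch of how the difference appears in MPI (it assumes exactly two processes; the tags and busy-work loop are placeholders): blocking calls hold up the caller until its buffer is safe to touch again, while non-blocking calls let a process post the communication and keep computing until it truly needs the data.

```c
/* Blocking vs. non-blocking exchange between two ranks.
 * Run with exactly two processes, e.g.: mpirun -np 2 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int partner = 1 - rank;            /* assumes exactly 2 ranks */
    int outgoing = rank, incoming = -1;

    /* Blocking: each call returns only when its buffer is safe to reuse or
     * read, so the two ranks effectively proceed in lock step. The ordering
     * below (send/recv vs. recv/send) avoids deadlock. */
    if (rank == 0) {
        MPI_Send(&outgoing, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
        MPI_Recv(&incoming, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(&incoming, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&outgoing, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
    }

    /* Non-blocking: post both operations, do useful work, then wait.
     * Communication can overlap with computation. */
    MPI_Request reqs[2];
    MPI_Isend(&outgoing, 1, MPI_INT, partner, 1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&incoming, 1, MPI_INT, partner, 1, MPI_COMM_WORLD, &reqs[1]);

    long busy = 0;
    for (int i = 0; i < 10000; i++) busy += i;   /* stand-in for real work */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %d (busy work = %ld)\n", rank, incoming, busy);

    MPI_Finalize();
    return 0;
}
```

The non-blocking version pays off only when there is real work to overlap with the transfer, and it is easier to get wrong: forgetting the wait, or touching a buffer before its request completes, are classic bugs.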
Current Lecture Handout
Speaking the Same Language: How Parallel Programs Coordinate and Communicate, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
The semester at a glance:
Bienia, C. (2011). Benchmarking modern multiprocessors. Princeton University.
Dongarra, J., et al. (2022). Report on the Frontier exascale system. International Journal of High Performance Computing Applications, 36(4), 435–451. https://doi.org/10.1177/10943420221110772
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Gropp, W., Lusk, E., & Skjellum, A. (1999). Using MPI-2: Advanced features of the message-passing interface. MIT Press.
Hennessy, J. L., & Patterson, D. A. (2019). Computer architecture: A quantitative approach (6th ed.). Morgan Kaufmann.
Kunkel, J., Balaprakash, P., & Costan, A. (2020). High-performance computing in the exascale era. Springer.
Pacheco, P. S. (2011). An introduction to parallel programming. Morgan Kaufmann.
Top500. (2021). Fugaku retains top spot on TOP500 list of world’s supercomputers. https://www.top500.org/lists/2021/06/
Zheng, W., et al. (2020). Efficient parallel computing for large-scale protein folding. Bioinformatics, 36(2), 539–545. https://doi.org/10.1093/bioinformatics/btz566
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).