Breaking Problems Apart:
The Art of Designing Parallel Algorithms
In this topic, we learn how to think like parallel problem-solvers. We move beyond writing code and instead focus on how to design algorithms that make full use of many processors. We learn how to split a big task into smaller ones, assign them wisely, and let them communicate efficiently. We study how to find the balance between independence and cooperation—between dividing work and bringing it back together.
We explore decomposition, mapping, and interaction patterns—the building blocks of all parallel algorithms. These ideas help us turn messy, sequential logic into structured teamwork that computers can execute efficiently.
Break down computational problems into independent parallel tasks.
Build effective data and task decompositions that reduce dependency and communication.
Apply mapping and balancing techniques that keep processors equally busy and minimize waiting time.
What makes one way of decomposing a problem better than another?
How do dependencies between tasks slow down the team?
How can we design a mapping that keeps all processors working together efficiently?
Decomposition Techniques
How We Break Problems: Recursive, Data, and Task Decomposition
Finding Independence and Choosing Granularity
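To make this part of the outline concrete, here is a minimal sketch of recursive decomposition with an explicit granularity cutoff, written in C with OpenMP tasks. The array size, the CUTOFF value, and the function names are illustrative choices for this sketch, not material from the lecture handout.

```c
/* Recursive decomposition of a sum with a granularity cutoff.
 * A minimal sketch; compile with: gcc -fopenmp -O2 rsum.c
 * CUTOFF and N are illustrative values, not prescribed anywhere. */
#include <stdio.h>
#include <omp.h>

#define CUTOFF 10000            /* below this size, a task is not worth its overhead */

long rsum(const long *a, long n)
{
    if (n <= CUTOFF) {          /* coarse enough: fall back to sequential code */
        long s = 0;
        for (long i = 0; i < n; i++)
            s += a[i];
        return s;
    }
    long left, right;
    #pragma omp task shared(left)   /* the left half becomes an independent task */
    left = rsum(a, n / 2);
    right = rsum(a + n / 2, n - n / 2);
    #pragma omp taskwait            /* dependency: both halves must finish before combining */
    return left + right;
}

int main(void)
{
    enum { N = 1000000 };
    static long a[N];
    for (long i = 0; i < N; i++)
        a[i] = 1;

    long total = 0;
    #pragma omp parallel
    #pragma omp single              /* one thread starts the recursion; tasks fan out */
    total = rsum(a, N);

    printf("total = %ld\n", total); /* expect 1000000 */
    return 0;
}
```

The cutoff is the granularity knob: set it too low and task-creation overhead swamps the useful work; set it too high and there are too few tasks to keep every processor busy.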
Mapping Tasks to Processes
Static and Dynamic Mapping
Load Balancing and Communication Minimization
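The contrast between static and dynamic mapping shows up directly in OpenMP's schedule clause. The sketch below assumes a skewed workload where later iterations cost more, so equal-sized static blocks leave early-finishing threads idle; work() is a made-up stand-in for any irregular computation, and N and the chunk size are arbitrary.

```c
/* Static vs. dynamic mapping on an irregular loop.
 * A minimal sketch; compile with: gcc -fopenmp -O2 map.c -lm */
#include <stdio.h>
#include <math.h>
#include <omp.h>

/* Iteration i costs time roughly proportional to i, so the load is skewed. */
static double work(int i)
{
    double s = 0.0;
    for (int k = 0; k < i; k++)
        s += sin((double)k);
    return s;
}

int main(void)
{
    enum { N = 20000 };
    double total = 0.0, t;

    t = omp_get_wtime();
    #pragma omp parallel for schedule(static) reduction(+:total)
    for (int i = 0; i < N; i++)     /* static: iterations assigned to threads up front */
        total += work(i);
    printf("static:  %.3f s (total %.1f)\n", omp_get_wtime() - t, total);

    total = 0.0;
    t = omp_get_wtime();
    #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
    for (int i = 0; i < N; i++)     /* dynamic: idle threads keep claiming the next chunk */
        total += work(i);
    printf("dynamic: %.3f s (total %.1f)\n", omp_get_wtime() - t, total);

    return 0;
}
```

On a loop this skewed, the dynamic schedule usually wins because no thread sits idle while another grinds through the expensive tail, at the price of a small scheduling cost per chunk.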
Interaction Patterns and Task Graphs
Seeing Dependencies through Task Graphs
Quantifying Communication Overhead
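As a worked example of what quantifying overhead means, the bookkeeping below uses the conventional symbols from standard parallel-computing texts; these definitions and the tree-reduction case are textbook conventions, not taken from the handout. Here T_S is the best sequential time, T_P the parallel time on p processes, and t_s and t_w the per-message startup and per-word transfer costs.

```latex
% Standard overhead bookkeeping; symbols as defined in the lead-in above.
\[
  T_o = p\,T_P - T_S
  \qquad \text{(total time spent on anything other than useful work)}
\]
\[
  t_{\mathrm{comm}} = t_s + m\,t_w
  \qquad \text{(cost model for one message of $m$ words)}
\]
% Worked case: summing n numbers on p processes with a tree reduction
% takes roughly T_P = n/p + 2\log p, so
\[
  T_o = p\left(\frac{n}{p} + 2\log p\right) - n = 2\,p\log p .
\]
```

The 2 p log p term is pure communication overhead: it grows with p even though the useful work does not, which is why efficiency falls as processes are added unless the problem size grows as well.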
Current Lecture Handout
Breaking Problems Apart: The Art of Designing Parallel Algorithms, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
References
Baker, C., Chaudhuri, A., & Kale, L. V. (2021). Dynamic load balancing in Charm++ for exascale applications. Concurrency and Computation: Practice and Experience, 33(21), e6379. https://doi.org/10.1002/cpe.6379
Besta, M., et al. (2021). Communication-efficient graph algorithms on distributed-memory systems. Proceedings of the ACM/IEEE International Conference for High Performance Computing (SC21). https://doi.org/10.1145/3458817.3480841
Blumofe, R. D., & Leiserson, C. E. (1999). Scheduling multithreaded computations by work stealing. Journal of the ACM, 46(5), 720–748. https://doi.org/10.1145/324133.324234
Pacheco, P. S. (2011). An introduction to parallel programming. Morgan Kaufmann.
Palmer, T. N., et al. (2022). High-resolution climate modeling in the exascale era. Nature Climate Change, 12(3), 198–207. https://doi.org/10.1038/s41558-022-01290-2
Pinedo, M. L. (2022). Scheduling: Theory, algorithms, and applications (6th ed.). Springer.
Plimpton, S., et al. (2020). Efficient molecular dynamics simulations with topology-aware task mapping. Computer Physics Communications, 256, 107437. https://doi.org/10.1016/j.cpc.2020.107437
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).