CMSC 280: Parallel Processing
Parallel processing lies at the heart of today’s scientific computing, data analytics, and artificial intelligence. As our problems grow in complexity and our data expands in scale, we quickly realize that sequential computation can no longer keep pace with the demands of modern research and industry. In this course, we build upon what we learned in CMSC 180 (Introduction to Parallel Computing) and take the next step—exploring advanced algorithmic strategies and applications that power large-scale, high-performance systems.
Together, we will parallelize computationally intensive problems across diverse domains such as numerical linear algebra, graph analytics, combinatorial optimization, dynamic programming, and spectral methods. Through these classes of algorithms, we will learn to navigate the delicate balance between computation and communication, understand how algorithm structure interacts with hardware architecture, and critically evaluate how well our solutions scale across distributed- and shared-memory systems.
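To make the computation–communication balance concrete, here is a back-of-the-envelope sketch, illustrative only and not taken from the course pack: dense n × n matrix multiplication distributed over p processes in a 2D block layout, in the style of Cannon's algorithm, with assumed per-flop, per-message, and per-word costs t_c, t_s, and t_w.

% Illustrative cost model (an assumption-laden sketch, not course-issued):
% dense n x n matrix multiplication on p processes in a 2D block layout,
% in the style of Cannon's algorithm.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Each process owns an $\frac{n}{\sqrt{p}} \times \frac{n}{\sqrt{p}}$ block.
With $t_c$, $t_s$, and $t_w$ denoting per-flop, per-message, and per-word
costs, the per-process times are roughly
\begin{align*}
  T_{\mathrm{comp}} &\approx \frac{2n^3}{p}\, t_c, \\
  T_{\mathrm{comm}} &\approx 2\sqrt{p}\, t_s + 2 t_w \frac{n^2}{\sqrt{p}}.
\end{align*}
Computation shrinks like $1/p$ while communication shrinks only like
$1/\sqrt{p}$, so communication eventually dominates. Requiring their ratio
to stay fixed yields the classic isoefficiency result for this algorithm:
the total work $W = n^3$ must grow as $\Theta(p^{3/2})$ to hold efficiency
constant as $p$ increases.
\end{document}

Reasoning of this kind, repeated across the domains above, is the habit of analysis the course asks us to internalize.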
Our goal is not merely to master the syntax of parallel programming, but to cultivate a new way of thinking—one that views every complex problem as a collection of concurrent activities, carefully coordinated, synchronized, and optimized to harness the full power of modern computing platforms.
This page presents the Course Pack for CMSC 280, which includes the course learning outcomes, the topics and activities that define our semester’s journey, links to course guides from past offerings, and references that anchor our discussions in both classical and contemporary literature. Each topic is designed to bring us closer to the mindset of an HPC researcher — one who can think critically about performance, design experiments, and push computational limits with purpose and precision.
On this page we will find:
The course learning outcomes
The list of advanced topics and activities
Links to previous course guides and reading materials
Some instructional and academic materials on this site are reserved for members of the University community. Links marked with an asterisk (*) point to these restricted resources. If you are browsing from outside the University network or are not signed in with your official University account, you may encounter an access restriction or an error message (such as the familiar “404 – Page Not Found”). To view these materials, please ensure that you are logged in through your University credentials.
Parallel processing stands at the frontier of computational performance, where creativity meets efficiency and theory meets engineering. In this course, we push beyond learning how to write parallel code—we learn how to think in parallel, how to evaluate architectures, and how to innovate algorithms that scale with tomorrow’s computing challenges. Following the ACM/IEEE-CS recommendations for advanced study in Parallel and Distributed Computing (PDC), this course deepens our understanding of performance optimization, load balancing, and scalable design for large-scale scientific and data-intensive problems.
By the end of the course, we should be able to:
Critically Analyze and Compare Parallel Algorithms – Evaluate the strengths, weaknesses, and scalability of various parallel algorithms (e.g., sorting, matrix multiplication, graph algorithms) in relation to specific problem domains and computational architectures, and justify algorithmic choices based on performance metrics and architectural constraints.
Innovate and Propose Optimizations for Existing Algorithms – Identify gaps or inefficiencies in traditional parallel algorithms and design novel solutions or optimizations to address these issues, focusing on practical applications and emerging hardware (e.g., GPUs, cloud, distributed systems).
Master Advanced Parallel Programming Techniques – Implement and optimize advanced parallel algorithms (dense matrix algorithms, graph algorithms, dynamic programming, FFT) using industry-standard libraries (e.g., MPI, OpenMP, CUDA) while ensuring performance scalability and load balancing.
Evaluate and Interpret Parallel Algorithm Performance – Use theoretical performance models (e.g., Amdahl’s law, isoefficiency) and empirical benchmarking to assess the efficiency and scalability of parallel algorithms, interpreting results to propose further optimizations; a minimal sketch of this workflow follows this list.
Collaborate and Communicate Complex Concepts Effectively – Work both independently and collaboratively to solve complex parallel processing problems, actively participating in class discussions, group projects, and peer reviews to communicate, critique, and refine algorithmic ideas.
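As a concrete, hedged illustration of the two technical outcomes above (the file name, problem size, and assumed parallel fraction below are placeholders, not course material): an OpenMP dot product timed against its sequential baseline, with the measured speedup compared to Amdahl's law, S(p) = 1 / ((1 - f) + f/p), where f is the parallelizable fraction of the run.

/*
 * A minimal, hypothetical sketch, not course-issued code: time an OpenMP
 * parallel dot product against its sequential baseline and compare the
 * measured speedup with Amdahl's bound S(p) = 1 / ((1 - f) + f/p).
 * Compile (assuming GCC): gcc -O2 -fopenmp amdahl_sketch.c -o amdahl_sketch
 */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const long n = 20000000;         /* problem size; tune for your machine */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    if (!a || !b) { perror("malloc"); return 1; }
    for (long i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

    /* Sequential baseline. */
    double t0 = omp_get_wtime();
    double seq = 0.0;
    for (long i = 0; i < n; i++) seq += a[i] * b[i];
    double t_seq = omp_get_wtime() - t0;

    /* Parallel version: reduction(+:par) gives each thread a private
       partial sum that OpenMP combines when the loop ends. */
    t0 = omp_get_wtime();
    double par = 0.0;
    #pragma omp parallel for reduction(+:par)
    for (long i = 0; i < n; i++) par += a[i] * b[i];
    double t_par = omp_get_wtime() - t0;

    int p = omp_get_max_threads();
    double f = 0.95;                 /* assumed parallel fraction, illustration only */
    printf("threads=%d  dot=%.0f/%.0f  seq=%.4fs  par=%.4fs  "
           "speedup=%.2f  Amdahl bound (f=%.2f)=%.2f\n",
           p, seq, par, t_seq, t_par, t_seq / t_par,
           f, 1.0 / ((1.0 - f) + f / p));
    free(a); free(b);
    return 0;
}

In practice we would repeat each timing, pin threads, and vary p to see where the measured curve departs from the model; that comparison is exactly the kind of evidence the benchmarking outcome asks for.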
At the frontier of high-performance computing, each topic we study represents not just a lesson, but a research direction. The areas listed below outline the core themes of CMSC 280, where we examine how parallelism interacts with scalability, communication, and architecture. Each entry links to its own Topic Pack, containing detailed readings and lecture handouts that invite us to analyze, experiment, and innovate.
These topics collectively form the intellectual map of CMSC 280—a course where we no longer just apply parallel concepts, but begin to expand them toward new discoveries in performance, efficiency, and design.
Read the introductory handout: Introduction to Parallel Processing*
Read the concluding handout: Reflections and Futures in Parallel Processing*
Each semester of CMSC 280 leaves a distinctive imprint—a reflection of how our understanding of parallel processing continues to evolve alongside the technologies that power it. The course guides listed below document our collective journey through shifting architectures, emerging algorithms, and evolving paradigms of high-performance computing.
These records, beginning from Academic Year (AY) 2022–2023, illustrate how we have progressively refined both our pedagogy and our practice. Course guides from earlier offerings are currently being prepared for release and will soon provide a fuller picture of how CMSC 280 has adapted over time, growing from a technical course into a living chronicle of research-driven teaching.
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
In CMSC 280, we move beyond learning how to parallelize—we learn how to innovate in parallelism. This course challenges us to think like researchers, not just programmers. Together, we will explore open-ended, research-grade problems that demand both creativity and rigor—from matrix computations and graph analytics to dynamic programming and large-scale optimization.
By the end of the course, we won’t just be implementing parallel solutions; we’ll be pushing the boundaries of computational performance itself. We will learn to question assumptions, design experiments, and discover new ways to make computation faster, smarter, and more scalable. In doing so, we take part in the continuing story of how parallel thinking drives innovation in high-performance computing.
Read more: CMSC 180: Introduction to Parallel Computing