CMSC 180: Introduction to Parallel Computing
Welcome to the Course Pack for CMSC 180: Introduction to Parallel Computing, a core required course in the Bachelor of Science in Computer Science curriculum at UPLB. In line with the curriculum guidelines of the Association for Computing Machinery / IEEE Computer Society, which recognize the growing importance of parallel and distributed computing in undergraduate education, this course prepares us to think in parallel and to build systems that compute at scale.
On this page we will find:
The learning outcomes for the course
The main topics and hands-on activities we will engage in
A collection of recent course guides documenting the semester offerings
A curated list of references to deepen our understanding
Parallel computing sits at the very heart of modern computational science. As systems grow larger, data more complex, and algorithms more demanding, our ability to think in parallel becomes the new literacy of computing. In this course, we will learn not only how to divide problems among processors but also how to think critically about scalability, efficiency, and correctness. Following the recommendations of the ACM/IEEE Computer Science Curricula* (ACM/IEEE-CS, 2013), this course equips students with the conceptual and practical foundations of Parallel and Distributed Computing (PDC)—a core knowledge area essential to every computer scientist. By the end of the course, we should be able to:
Explain the fundamental limitations of serial computation in terms of performance, scalability, and efficiency.
Differentiate and classify various types of parallel computer architectures based on their organization, communication, and memory models.
Analyze and evaluate the performance of parallel systems using established performance metrics.
Design, implement, and test simple parallel programs that demonstrate efficient use of concurrency and synchronization.
*Note that 2013 was when the ACM/IEEE first recommended the inclusion of PDC in the Computer Science Curricula. This recommendation was reaffirmed in the 2023 revision.
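The first and third outcomes above both hinge on quantifying the limits of serial computation and the payoff of adding processors. As a preview, here is a minimal Python sketch of Amdahl's law, the standard starting point for such analysis (this page does not prescribe a particular formula or language; both are assumptions here):

```python
# Amdahl's law: predicted speedup when a fraction f of a program's
# work is inherently serial and the rest parallelizes perfectly
# across p processors.

def amdahl_speedup(f, p):
    """Speedup S(p) = 1 / (f + (1 - f) / p)."""
    return 1.0 / (f + (1.0 - f) / p)

def efficiency(f, p):
    """Efficiency E(p) = S(p) / p, i.e., how well each processor is used."""
    return amdahl_speedup(f, p) / p

if __name__ == "__main__":
    # Even with only 5% serial work, speedup can never exceed 1/f = 20,
    # and efficiency falls as processors are added.
    for p in (2, 8, 64, 1024):
        s = amdahl_speedup(0.05, p)
        print(f"p={p:5d}  speedup={s:6.2f}  efficiency={efficiency(0.05, p):.3f}")
```

Running the loop shows the core tension the course examines: adding processors raises speedup toward a hard ceiling while efficiency steadily drops.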
Parallel computing is a vast terrain of ideas, techniques, and systems working together toward faster and more efficient computation. Enumerated below are the general topics that we will explore throughout the semester. Each topic links to its own Topic Pack, which contains the learning objectives, guiding questions, topic outline, lecture handouts, and reference materials that will help us navigate this exciting computational frontier.
Read the introductory handout: Learning to Think in Parallel*
Why We Think Faster Together: The Story of Parallel Computing
Overview and Motivation
Connected Minds, Connected Machines: How Parallel Systems Talk and Think
Parallel Programming Platforms and Architectures
Speaking the Same Language: How Parallel Programs Coordinate and Communicate
Parallel Programming Models and Paradigms
Chasing Speed: How We Measure, Break, and Redefine Performance
Performance Metrics and Scalability
Breaking Problems Apart: The Art of Designing Parallel Algorithms
Principles of Parallel Algorithm Design: Foundations
Counting the Cost of Speed: Why Every Second (and Processor) Matters
Principles of Parallel Algorithm Design: Performance and Cost
When Processors Talk: The Hidden Conversations of Parallel Programs
Basic Communication Operations
Making It All Work Together: The Midterm Integration of Parallel Thinking
Midterm Integration
Reading the Future: How We Predict Parallel Performance
Analytical Modeling of Parallel Platforms: Introduction
Reading the Numbers: How We Test If Our Models Tell the Truth
Analytical Modeling of Parallel Platforms: Performance Evaluation
Sorting in Sync: How We Teach Computers to Agree on Order
Parallel Sorting Algorithms: Foundations
Beyond the Basics: How Smart Sorting Scales in Parallel Worlds
Advanced Parallel Sorting Algorithms
When Equations Meet Experience: The Art of Parallel Wisdom
Integrating Analytical and Algorithmic Insights
The Big Picture: How It All Connects in Parallel Thinking
Course Integration
Read the concluding handout: When We Think Together*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
Every offering of CMSC 180 leaves behind a distinct trace—a record of how we have continually refined the art and science of teaching parallel computing. Listed below are the course guides (available only to UP constituents) used since Academic Year (AY) 2022–2023, each reflecting our evolving strategies in instruction, assessment, and engagement. Course guides from earlier years are currently being curated and will be made available soon, offering future learners a glimpse into how the discipline—and our collective understanding of it—has grown over time.
AY 2022–2023
AY 2023–2024
AY 2024–2025
Second Semester (coming soon)
AY 2025–2026
First Semester (ongoing)
Second Semester (coming soon)
As we work through CMSC 180, we're not just learning how to write parallel code; we're cultivating a mindset of scale, coordination, and efficiency. We will discover how algorithms can be transformed to exploit multiple processors, how design decisions impact performance, and how system architecture influences correctness. UPLB hopes that we carry this experience forward into CMSC 280 and beyond, where we will refine these skills into expertise and leadership in high-performance computing.
Read more: CMSC 280: Parallel Processing