Welcome to HPC@UPLB, where we explore the art and science of scaling computation to solve the world’s most demanding problems with many minds and many machines computing as one.
This site brings together our instructional materials, research outputs, and community initiatives that advance high performance computing (HPC) at the University of the Philippines Los Baños (UPLB).
HPC allows us to solve problems that are too large, too complex, or too time-consuming for a single processor by distributing the work across parallel, distributed, and clustered systems.
Here at UPLB, we apply this principle across teaching and research — transforming the way we compute, analyze, and innovate.
In our instructional programs, we teach students how to design, implement, run, and debug correct parallel algorithms.
We start with a simple question: What makes a parallel algorithm correct? From there, we explore how multiple processors can work together efficiently without compromising accuracy or synchronization.
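One way to make that correctness question concrete is the classic lost-update problem: two threads sharing a counter. The sketch below is illustrative Python, not course material; it shows how a lock keeps concurrent increments from silently overwriting each other.

```python
# Two threads increment one shared counter. The read-modify-write in
# "counter += 1" can interleave across threads; the lock serializes it
# so that every increment survives.
import threading

counter = 0
lock = threading.Lock()

def add(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # remove this lock and updates can be lost
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 -- correct, because the lock made increments atomic
```

A "fast but wrong" version without the lock is exactly the kind of bug students learn to recognize: the program still runs, but its answer quietly drifts from the sequential result.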
At the undergraduate level, we offer CMSC 180: Introduction to Parallel Computing, where students learn the principles of speedup, scalability, and load balancing.
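The first speedup principle students meet is Amdahl's law: if a fraction of the work is inherently serial, that fraction caps the speedup no matter how many processors are added. The numbers below are illustrative, not tied to any specific course exercise.

```python
# Amdahl's law: with serial fraction s and p processors,
# speedup = 1 / (s + (1 - s) / p), bounded above by 1 / s.
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Even 10% serial work limits scalability sharply:
for p in (2, 4, 16, 1024):
    print(p, round(amdahl_speedup(0.10, p), 2))
# 2    1.82
# 4    3.08
# 16   6.4
# 1024 9.91   -- approaching the 1/0.10 = 10x ceiling
```

This is why the course pairs speedup with load balancing: real scalability comes from shrinking the serial fraction and keeping all processors busy, not just from adding cores.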
We begin with real-world examples that demonstrate why sequential thinking has limits, then move on to implementing solutions using shared-memory and message-passing models.
In the latter half of the semester, we focus on the Message Passing Interface (MPI) — the foundation of distributed parallel programming used in modern scientific and engineering applications.
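The heart of MPI programming is the point-to-point pattern: one rank sends, another receives. Real course code uses MPI itself (MPI_Send/MPI_Recv in C, or comm.send/comm.recv in mpi4py); the stand-in sketch below emulates the same pattern with Python's multiprocessing pipes so it runs without an MPI installation.

```python
# MPI-style send/receive, emulated with a multiprocessing Pipe.
# Rank 0 computes a value and sends it; rank 1 receives and reports it.
import multiprocessing as mp

def worker(rank: int, conn) -> None:
    if rank == 0:
        conn.send({"from": 0, "payload": sum(range(100))})  # stands in for MPI_Send
    else:
        msg = conn.recv()                                   # stands in for MPI_Recv
        print(f"rank 1 received {msg['payload']} from rank {msg['from']}")

if __name__ == "__main__":
    left, right = mp.Pipe()
    procs = [mp.Process(target=worker, args=(0, left)),
             mp.Process(target=worker, args=(1, right))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```

The shape carries over directly: each process runs the same program, branches on its rank, and communicates explicitly — the core discipline of distributed parallel programming.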
At the graduate level, we offer CMSC 280: Parallel Processing, which extends these principles to large-scale scientific computing and heterogeneous architectures.
Students explore advanced concepts such as parallel performance tuning, scheduling, communication overhead reduction, and parallel debugging.
Hands-on projects involve writing efficient parallel programs for high-performance systems, where computation meets experimentation.
Through these two courses, we train future scientists and engineers to think beyond the single core — to think in parallel.
Our research builds on the same foundation we teach — applying parallel thinking to real-world challenges.
We investigate how to schedule parallel tasks efficiently, develop scientific computing applications, and design parallel systems that power machine learning and data analytics.
Parallel Task Scheduling:
We design algorithms that allocate computing tasks across processors efficiently, reducing idle time and improving overall throughput.
Our scheduling models are tested on multi-core clusters and heterogeneous systems to find the best trade-offs between computation and communication.
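A minimal sketch of the kind of heuristic this research area starts from is Longest Processing Time (LPT) list scheduling: sort tasks by decreasing cost, then greedily assign each to the currently least-loaded processor. The task costs and processor count below are made-up illustration data, not results from our scheduling models.

```python
# LPT list scheduling: greedy assignment of the largest remaining task
# to the least-loaded processor, tracked with a min-heap.
import heapq

def lpt_schedule(task_costs, num_procs):
    """Return (makespan, assignment), where assignment[p] lists task costs on processor p."""
    loads = [(0, p) for p in range(num_procs)]        # min-heap of (load, processor id)
    heapq.heapify(loads)
    assignment = [[] for _ in range(num_procs)]
    for cost in sorted(task_costs, reverse=True):     # longest tasks first
        load, p = heapq.heappop(loads)                # least-loaded processor
        assignment[p].append(cost)
        heapq.heappush(loads, (load + cost, p))
    makespan = max(sum(tasks) for tasks in assignment)
    return makespan, assignment

makespan, plan = lpt_schedule([7, 5, 4, 3, 3, 2], num_procs=2)
print(makespan, plan)  # 12 [[7, 3, 2], [5, 4, 3]] -- perfectly balanced here
```

Research-grade schedulers go far beyond this greedy rule — they must also account for communication costs, task dependencies, and heterogeneous processor speeds — but the idle-time-versus-throughput trade-off is already visible in this small example.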
Scientific Computing Applications:
We write parallel codes that accelerate simulations in physics, chemistry, and environmental modeling — allowing researchers to analyze data faster and at higher resolutions.
These projects bridge computer science and applied sciences, demonstrating how parallelism drives discovery.
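A toy instance of the parallel-simulation pattern behind such codes is a Monte Carlo estimate of pi, with samples split across a process pool. The chunk sizes and seeds below are illustrative only, not drawn from any UPLB project.

```python
# Monte Carlo estimate of pi: each worker counts random points that land
# inside the unit quarter-circle; the chunks run in parallel and are
# combined at the end.
import random
from multiprocessing import Pool

def hits_in_circle(args):
    n, seed = args
    rng = random.Random(seed)            # independent stream per chunk
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

if __name__ == "__main__":
    chunks = [(250_000, seed) for seed in range(4)]    # 4 independent chunks
    with Pool(processes=4) as pool:
        total_hits = sum(pool.map(hits_in_circle, chunks))
    print(4 * total_hits / 1_000_000)    # approaches pi as the sample count grows
```

The same split-compute-combine structure scales from this toy to production simulations: independent sub-domains or sample batches run concurrently, and a cheap reduction assembles the global answer.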
Parallel Systems for Machine Learning:
We develop both hardware and software systems that support distributed training of machine learning models.
By parallelizing model training and inference, we enable AI systems to learn faster and handle larger datasets efficiently.
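The core of data-parallel training can be sketched in a few lines: each worker computes a gradient on its shard of the data, the gradients are averaged (the all-reduce step), and every worker applies the same update. The one-parameter linear model and toy data below are invented to keep the sketch self-contained; real systems do this across GPUs or nodes.

```python
# Data-parallel gradient descent on a toy model y = w * x.
def gradient(w, shard):
    # derivative of mean squared error for y ~ w * x on this shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, shards, lr=0.01):
    grads = [gradient(w, s) for s in shards]   # in real systems: one per worker
    avg = sum(grads) / len(grads)              # the all-reduce (average) step
    return w - lr * avg                        # identical update on every worker

data = [(x, 3.0 * x) for x in range(1, 9)]     # true slope is 3
shards = [data[:4], data[4:]]                  # two "workers", each with half the data
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges to 3.0, matching single-machine training
```

Because the averaged gradient equals a gradient over the pooled data (up to shard weighting), the parallel run converges to the same model as a sequential one — which is exactly what makes data parallelism attractive for large datasets.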
Our collective research efforts contribute to UPLB’s growing expertise in high-performance and scientific computing, strengthening its role as a national leader in computational research.
We see instruction and research not as separate tracks, but as interconnected layers of the same system.
Our classrooms serve as incubators for ideas that later evolve into research projects, while our laboratories provide real data and problems that make instruction meaningful.
Students who start with course exercises in MPI often move on to designing parallel schedulers, optimizing neural networks, or building distributed systems for their theses and projects.
Through HPC@UPLB, we cultivate a culture of shared learning — where teaching inspires research, and research feeds back into teaching.
Together, we prepare students and researchers to think in parallel, compute at scale, and collaborate effectively in solving the grand computational challenges of our time.
As we continue to teach, research, and innovate together, we strengthen UPLB’s reputation as a center of excellence in parallel and high-performance computing.
This site is not only a collection of materials — it is a living record of how our ideas evolve and scale through collaboration.
Just as parallel processors share workloads to achieve a common goal, we share knowledge and effort to accelerate progress.
Through unity in diversity — both human and computational — we build the future in parallel.