Connected Minds, Connected Machines:
How Parallel Systems Talk and Think
In this topic, we explore how computers connect, share work, and cooperate as teams of processors. We see that the design of a parallel system—how its processors talk, how its memory is organized, and how its network is shaped—affects how fast and how efficiently it can solve big problems. We study shared-memory, distributed-memory, and hybrid architectures, and we learn how interconnection networks make all of them possible.
By the end of this topic, we will understand not only how these systems are built, but why each design exists. Just like cities need different kinds of roads for traffic flow, parallel computers need the right kind of communication paths to move data efficiently.
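To make the three organizations concrete before we dive in, here is a minimal hybrid "hello" sketch in C. This is an illustrative example, not part of the handout, and it assumes an MPI + OpenMP toolchain (for instance, compiled with mpicc -fopenmp). Each MPI process stands for a distributed-memory node with its own address space, while the OpenMP threads it spawns stand for cores sharing that node's memory.

```c
/* Hybrid hello sketch (assumes an MPI + OpenMP toolchain).
 * Build:  mpicc -fopenmp hybrid_hello.c -o hybrid_hello
 * Run:    mpirun -np 2 ./hybrid_hello
 * Each MPI process models a distributed-memory "house" with its own
 * address space; the OpenMP threads inside it model cores sharing
 * that house's memory, the "one big table". */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I?        */
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* how many nodes in total */

    /* Threads inside one process share the same memory. */
    #pragma omp parallel
    {
        printf("process %d of %d, thread %d of %d\n",
               rank, nprocs, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Running it on a multicore machine prints one line per (process, thread) pair, which is exactly the two-level structure a hybrid system exposes: message passing between processes, shared memory within each one.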
Describe the organization of shared-memory, distributed-memory, and hybrid systems and how they support parallel computing.
Compare different network topologies—such as mesh, torus, hypercube, and Fat Tree—and explain how each affects communication cost.
Analyze how architectural design choices influence scalability and performance in parallel systems.
How do different architectures help us run programs faster or at larger scales?
What trade-offs do we face between simplicity, speed, and scalability?
How does network topology shape how data moves between processors? (A quick back-of-the-envelope comparison follows below.)
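Before the handout goes into detail, a rough comparison can make that last question concrete. The C sketch below is an illustration using the standard textbook formulas (not material from the handout): it tabulates the diameter, meaning the worst-case number of hops between two processors, and the total link count of a square 2-D mesh, a 2-D torus, and a hypercube for a few processor counts. Fat Tree cost depends on the switch radix, so it is left to the lecture itself.

```c
/* Back-of-the-envelope topology comparison (a sketch using standard
 * textbook formulas; the processor counts p below are chosen to be both
 * perfect squares and powers of two so the 2-D mesh/torus and hypercube
 * formulas all apply).  Build with:  cc topo.c -o topo -lm */
#include <math.h>
#include <stdio.h>

int main(void) {
    int sizes[] = {16, 64, 256, 1024};
    int n = sizeof sizes / sizeof sizes[0];

    printf("%6s | %-18s | %-18s | %-18s\n",
           "p", "2-D mesh d/links", "2-D torus d/links", "hypercube d/links");

    for (int i = 0; i < n; i++) {
        int p    = sizes[i];
        int side = (int)round(sqrt((double)p)); /* sqrt(p) x sqrt(p) grid */
        int dim  = (int)round(log2((double)p)); /* hypercube dimension    */

        /* Diameter = worst-case hop count; links = point-to-point wires. */
        int mesh_d  = 2 * (side - 1),  mesh_l  = 2 * side * (side - 1);
        int torus_d = 2 * (side / 2),  torus_l = 2 * p;
        int cube_d  = dim,             cube_l  = p * dim / 2;

        printf("%6d | %4d / %-11d | %4d / %-11d | %4d / %-11d\n",
               p, mesh_d, mesh_l, torus_d, torus_l, cube_d, cube_l);
    }
    return 0;
}
```

The pattern to notice: the hypercube keeps its diameter logarithmic in p but pays for it with more links per node, while the mesh and torus stay cheap to wire but let the worst-case hop count grow with the square root of p. That trade-off between communication cost and hardware cost is the recurring theme of this topic.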
When Processors Share or Don’t Share
Shared-Memory Systems: One Big Table
Distributed-Memory Systems: Many Small Houses
Hybrid Systems: The Best of Both Worlds
How Processors Connect
Static and Dynamic Networks
Topologies: From Bus to Fat Tree
Performance: What Makes It Fast or Slow
Latency, Bandwidth, and Communication Cost
Scalability and Topology-Aware Design
Current Lecture Handout
Connected Minds, Connected Machines: How Parallel Systems Talk and Think, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
Caldwell, J., et al. (2022). Network-aware scheduling for exascale computing. Journal of Parallel and Distributed Computing, 165, 100–115. https://doi.org/10.1016/j.jpdc.2022.01.003
Choquette, J., et al. (2021). NVIDIA A100 Tensor Core GPU architecture. IEEE Micro, 41(2), 46–55. https://doi.org/10.1109/MM.2021.3051625
Kim, J., Dally, W. J., Abts, D., & Michelogiannakis, G. (2018). Technology-driven, highly-scalable Dragonfly topology. ACM SIGARCH Computer Architecture News, 46(2), 1–12.
Kurth, T., et al. (2018). Exascale deep learning for climate analytics. Proceedings of the International Conference for High Performance Computing (SC18). https://doi.org/10.1109/SC.2018.00077
Silva, L., Martins, R., & Sousa, L. (2019). Evaluating performance of shared-memory servers for parallel workloads. Future Generation Computer Systems, 97, 377–389. https://doi.org/10.1016/j.future.2019.02.036
Top500. (2021). Fugaku retains top spot on TOP500 list of world’s supercomputers. https://www.top500.org/lists/2021/06/
Zhu, J., et al. (2017). Reducing uncertainty in hurricane intensity forecasts using high-resolution ensemble simulations. Bulletin of the American Meteorological Society, 98(3), 453–469.
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).