Why We Think Faster Together: The Story of Parallel Computing
We begin our journey into the world of parallel computing. We learn why it became necessary, how it grew out of the limits of single processors, and how different ways of doing many things at once make our computers more capable. We also see how parallel thinking powers science, business, and even everyday digital life.

By the end of this topic, you should be able to:
Explain why we need parallel computing and how it helps us go beyond the limits of single-core processors.
Identify different levels and forms of parallelism and see how they appear in real examples.
Describe the main ideas of Flynn’s taxonomy and use it to classify computer architectures.
Questions to keep in mind as you go:
Why can’t we just keep making single processors faster forever? (A worked bound on this follows these questions.)
How do we see parallelism in real applications like AI or climate research?
What is Flynn’s taxonomy, and why is it still useful today?
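A quick preview of the first question: Amdahl (1967, in the readings below) bounds how much parallel hardware alone can help. Here is a minimal worked instance, with the 95% parallel fraction chosen purely for illustration:

```latex
\[
S(N) \;=\; \frac{1}{(1 - P) + \frac{P}{N}},
\qquad
\lim_{N \to \infty} S(N) \;=\; \frac{1}{1 - P}.
\]
% With P = 0.95 (an illustrative figure), even unlimited
% processors cap the speedup at S = 1 / 0.05 = 20x.
```

Here P is the fraction of the work that can run in parallel and N is the number of processors; the serial remainder (1 − P) sets a hard ceiling on speedup, which is why "just add cores" is never the whole answer.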
The Need for Speed: How We Got Here
When One Worker Wasn’t Enough
How We Share the Work
Parallelism vs. Concurrency: Similar but Not the Same
The Big Difference
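The difference is easiest to feel in code. Below is a minimal illustrative sketch (not from the handout), assuming CPython, where the global interpreter lock lets threads only interleave a CPU-bound task (concurrency) while separate processes actually run it simultaneously (parallelism). The function names and the work size N are made up for the demo.

```python
import multiprocessing as mp
import threading
import time

def busy(n: int) -> int:
    """CPU-bound work: sum of squares up to n."""
    return sum(i * i for i in range(n))

def timed(label: str, fn) -> None:
    """Run fn once and print its wall-clock time."""
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

N = 2_000_000  # illustrative work size

def with_threads() -> None:
    # Concurrency: both tasks are in flight at once, but CPython's
    # GIL lets only one thread execute bytecode at a time, so the
    # wall-clock time is about the same as running them serially.
    threads = [threading.Thread(target=busy, args=(N,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

def with_processes() -> None:
    # Parallelism: two processes really execute at the same instant
    # on two cores, so the wall-clock time roughly halves.
    with mp.Pool(processes=2) as pool:
        pool.map(busy, [N, N])

if __name__ == "__main__":
    timed("threads (concurrent)", with_threads)
    timed("processes (parallel)", with_processes)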
Speed, Scale, and Limits
The Many Levels of Working Together
From Bits to Big Tasks
Flynn’s Taxonomy: The Four Archetypes
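To preview the taxonomy (Flynn, 1972, in the readings below): SISD is a single instruction stream over a single data stream, SIMD applies one operation across many data elements at once, and MIMD runs independent instruction streams on independent data (MISD is rare in practice). Here is a minimal Python/NumPy sketch of the SISD-vs-SIMD contrast; the array size is arbitrary, and NumPy's vectorized add is only a conceptual stand-in for hardware SIMD:

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# SISD style: one instruction stream touches one data element per step.
out_sisd = np.empty_like(a)
for i in range(a.size):
    out_sisd[i] = a[i] + b[i]

# SIMD style: one logical instruction ("add") applied across many
# data elements at once; NumPy dispatches it to a vectorized kernel.
out_simd = a + b

assert np.allclose(out_sisd, out_simd)
```

An MIMD analogue would be the process pool from the earlier sketch: each worker runs its own instruction stream on its own data.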
Current Lecture Handout
Why We Think Faster Together: The Story of Parallel Computing, rev 2023*
Note: Links marked with an asterisk (*) lead to materials accessible only to members of the University community. Please log in with your official University account to view them.
References
Afgan, E., et al. (2018). The Galaxy platform for accessible, reproducible, and collaborative biomedical analyses: 2018 update. Nucleic Acids Research, 46(W1), W537–W544. https://doi.org/10.1093/nar/gky379
Amdahl, G. M. (1967). Validity of the single processor approach to achieving large scale computing capabilities. Proceedings of the AFIPS Spring Joint Computer Conference, 30, 483–485. https://doi.org/10.1145/1465482.1465560
Brown, T. B., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Flynn, M. J. (1972). Some computer organizations and their effectiveness. IEEE Transactions on Computers, C-21(9), 948–960. https://doi.org/10.1109/TC.1972.5009071
Moore, G. E. (1965, April 19). Cramming more components onto integrated circuits. Electronics, 38(8), 114–117.
MPI Forum. (1994). MPI: A message-passing interface standard. University of Tennessee.
OpenMP Architecture Review Board. (1997). OpenMP: A proposed industry standard API for shared memory programming in Fortran (Version 1.0).
Zhu, J., et al. (2017). Reducing uncertainty in hurricane intensity forecasts using high-resolution ensemble simulations. Bulletin of the American Meteorological Society, 98(3), 453–469.
Access Note: Published research articles and books are linked to their respective sources. Some materials are freely accessible within the University network or when logged in with official University credentials. Others will be provided to enrolled students through the class learning management system (LMS).