Modern computing systems often perform multiple tasks at once to maximize efficiency and responsiveness. This is achieved through concurrency and parallelisation, two distinct but related concepts. Understanding them helps in designing efficient programs that use system resources effectively, improve performance, and remain responsive.
Thinking concurrently is a fundamental aspect of computational thinking that improves problem-solving by identifying tasks that can run in parallel. While concurrency increases efficiency, it also introduces complexities and trade-offs. Understanding how to apply concurrent thinking helps in software development, real-world systems, and everyday decision-making.
Learning Objectives
By the end of this topic, you will:
Understand the definitions of concurrency and parallelisation.
Recognize the benefits of using these techniques in computing.
Identify common challenges in concurrent and parallel systems.
Differentiate between time-slicing (concurrency) and simultaneous execution (parallelisation).
Explore examples of concurrency and parallelisation in real-world applications.
Key Terms
Concurrency – Multiple tasks make progress during overlapping time periods, often using time-slicing on a single core.
Parallelisation – Tasks are divided into sub-tasks and executed simultaneously across multiple CPU cores or processors.
Time-Slicing – A technique where the CPU rapidly switches between tasks, creating the illusion of simultaneous execution.
Thread – A lightweight unit of execution within a process; threads in the same process run independently but share that process's memory.
Deadlock – A situation where two or more processes are stuck indefinitely, waiting for each other to release resources.
Race Condition – A programming error in which the outcome depends on the unpredictable timing of multiple threads accessing a shared resource, leading to incorrect behavior.
Context Switching – The process of saving and restoring the state of a CPU when switching between tasks.
Key Ideas
1. Concurrency: Overlapping Task Execution
Concurrency allows multiple tasks to make progress without necessarily running at the same exact time.
This is achieved on a single-core processor by rapidly switching between tasks using time-slicing.
Example: A web browser loading multiple tabs while allowing the user to scroll and type.
How it Works
The CPU scheduler allocates small time slots to different tasks.
Since switching is rapid, it gives the illusion of multitasking.
Concurrency improves responsiveness, especially in interactive applications.
2. Parallelisation: True Simultaneous Execution
Parallelisation involves splitting a large task into smaller parts, executed at the same time across multiple cores.
Requires multi-core processors or distributed systems.
Example: Processing a large dataset by splitting it into smaller chunks and running separate computations on each core.
How it Works
Each CPU core handles a portion of the workload independently.
Once all cores finish, results are combined for the final output.
Parallelisation can significantly reduce total processing time for workloads that divide cleanly into independent parts.
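The split-work-then-combine pattern above can be sketched in Python with the standard multiprocessing module. This is a minimal illustration, not a production recipe: a list of numbers stands in for a large dataset, each worker process sums one chunk, and the partial results are combined at the end.

```python
# Minimal sketch of parallelisation: split a dataset into chunks,
# sum each chunk in a separate worker process, then combine results.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles one portion of the workload independently.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the data into four roughly equal chunks.
    chunk_size = len(data) // 4
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(processes=4) as pool:
        partial_results = pool.map(partial_sum, chunks)
    # Once all workers finish, combine the partial results.
    total = sum(partial_results)
    print(total)  # equals sum(range(1_000_000))
```

On a multi-core machine the four chunks are summed at the same time; on a single core the processes still run correctly but are time-sliced, so there is no speed-up.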
3. Challenges of Concurrency and Parallelisation
Shared Resource Conflicts: Multiple processes accessing the same resource can cause data inconsistencies.
Example: Two users updating the same database record at the same time.
Deadlocks: If two tasks hold resources needed by each other, they may both be blocked indefinitely.
Example: A printer queue where two processes wait for each other to release access.
Race Conditions: If the order of execution affects the program’s outcome, it can lead to unpredictable results.
Example: Two processes incrementing a counter simultaneously without synchronization.
Overhead: Managing multiple threads or processes requires extra processing, such as context switching.
Example: A poorly designed parallel program might run slower due to the overhead of managing multiple threads.
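The counter example above can be made concrete with a short sketch. Two threads each increment a shared counter; without protection, the read-increment-write sequence can interleave and updates are lost. A threading.Lock makes each increment atomic, so the final count is always correct.

```python
# Sketch of avoiding a race condition with a lock. Without the lock,
# the read-modify-write in "counter += 1" can interleave between
# threads and some increments may be lost.
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:  # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock in place
```

Removing the `with lock:` line turns this into a demonstration of the race condition itself: the final count may then fall short of 200000, and by a different amount on each run.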
4. Real-World Examples
Web Servers: Handle multiple user requests concurrently by switching between requests.
File Download Managers: Download multiple files at the same time using parallelisation.
Video Games: Use parallelisation to separate physics simulation, rendering, and AI.
Search Engines: Index web pages faster by distributing work across multiple servers.
The table below compares Concurrency with Parallelisation.

Aspect          | Concurrency                                  | Parallelisation
Execution       | Tasks interleave on one core (time-slicing)  | Sub-tasks run at the same instant
Hardware        | Possible on a single core                    | Requires multiple cores or processors
Main benefit    | Responsiveness                               | Faster completion of large tasks
Typical example | A browser loading tabs while you type        | A dataset split across CPU cores
Guided Revision Notes
Define concurrency and explain how it improves efficiency in computing.
Define parallelisation and describe how it differs from concurrency.
Explain time-slicing and how it creates the illusion of multitasking.
What is a thread, and how does it relate to concurrency and parallelisation?
Describe deadlock and give an example of how it can occur in computing systems.
What is a race condition, and why is it problematic in concurrent programming?
Explain context switching and its role in concurrency.
How does parallelisation improve performance in computing tasks? Provide an example.
What are some challenges associated with concurrency and parallelisation?
Identify and explain one real-world example of concurrency and one of parallelisation.
Comprehension Questions
What is the main reason modern computing systems use concurrency and parallelisation?
How does time-slicing enable concurrency on a single-core processor?
Why is parallelisation only possible on multi-core processors or distributed systems?
What is the key difference between concurrency and parallelisation in execution?
How does a web browser use concurrency to improve user experience?
What kind of computing tasks benefit the most from parallelisation?
Explain why shared resource conflicts occur in concurrent systems.
What happens in a deadlock situation, and how does it affect system performance?
Why can context switching introduce overhead in concurrent programs?
How do video games use parallelisation to manage multiple complex tasks at once?
Practical Activity: Simulating Concurrent Thinking in Python
Activity 1: Simulating Concurrent Execution
This program demonstrates how two tasks can run concurrently using Python's threading module. The CPU rapidly switches between tasks, giving the illusion of simultaneous execution.
import threading
import time

def task1():
    for i in range(3):
        print("Task 1 is running")
        time.sleep(1)

def task2():
    for i in range(3):
        print("Task 2 is running")
        time.sleep(1)

# Create and start both threads; the scheduler interleaves them.
thread1 = threading.Thread(target=task1)
thread2 = threading.Thread(target=task2)
thread1.start()
thread2.start()

# Wait for both threads to finish before continuing.
thread1.join()
thread2.join()
print("Both tasks completed")
Discussion:
Did the two tasks run truly simultaneously, or did the CPU switch between them?
What would happen if one task took significantly longer than the other?
How does concurrent thinking help organize program execution?
Real-World Example: Handling Multiple User Requests in a Web Server
A professional programmer working on a web server might use threading to handle multiple client requests concurrently.
Example Case: Imagine a web application where multiple users send requests at the same time. Instead of processing them one by one (which would slow down the response time), the server creates a new thread for each request, allowing multiple users to interact with the website simultaneously.
Explanation:
Threading allows the server to switch between user requests quickly, keeping the website responsive.
Since requests are often waiting for data (e.g., fetching from a database), the CPU can efficiently switch to handling another request while waiting.
Threading is useful when tasks involve waiting (like network calls or database queries) rather than heavy computation.
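The one-thread-per-request pattern described above can be sketched as follows. Here handle_request is a hypothetical handler that simulates an I/O wait (a database query or network call) with time.sleep, so the CPU is free to switch to other requests while one is blocked.

```python
# Hedged sketch of a server-style pattern: one thread per request.
# time.sleep stands in for waiting on a database or network call.
import threading
import time

def handle_request(request_id, results):
    time.sleep(0.1)  # simulated I/O wait
    results[request_id] = f"response for request {request_id}"

results = {}
start = time.time()
threads = [threading.Thread(target=handle_request, args=(i, results))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
# The five waits overlap, so the total time is close to one wait
# (~0.1 s), not five sequential waits (~0.5 s).
print(len(results), round(elapsed, 1))
```

This is why threading suits I/O-bound work: the speed-up comes from overlapping waits, not from using more CPU cores.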
Activity 2: Parallel Processing Using Multiprocessing
This example runs tasks truly in parallel using the multiprocessing module; each process can run on a separate CPU core when one is available.
import multiprocessing
import time

def task1():
    for i in range(3):
        print("Task 1 is running")
        time.sleep(1)

def task2():
    for i in range(3):
        print("Task 2 is running")
        time.sleep(1)

# The __main__ guard is required on platforms that start child
# processes by spawning a fresh interpreter (e.g. Windows and macOS),
# so the children do not re-run this block when they import the file.
if __name__ == "__main__":
    process1 = multiprocessing.Process(target=task1)
    process2 = multiprocessing.Process(target=task2)
    process1.start()
    process2.start()
    process1.join()
    process2.join()
    print("Both processes completed")
Discussion:
Does this script execute the tasks truly simultaneously?
How does multiprocessing differ from threading?
What happens if we run this on a single-core processor?
Real-World Example: Video Processing in a Video Editing Application
A professional programmer developing a video editing or graphics rendering application might use multiprocessing to speed up tasks that require high computational power.
Example Case: Consider a video editing software where a user applies a filter to a long video. Instead of applying the filter to the entire video sequentially, the software splits the video into smaller chunks and processes them in parallel on multiple CPU cores.
Explanation:
Multiprocessing is ideal for tasks that require heavy computation, such as video rendering, AI processing, or large-scale simulations.
By splitting the workload across multiple CPU cores, the task completes much faster than if a single core handled the entire job.
Unlike threads, processes do not share memory by default, which avoids many shared-state bugs; the trade-off is that data must be passed explicitly between processes, which adds some overhead.
These Python activities help illustrate the principles of concurrent thinking by showing how multiple tasks can run at the same time. Whether using threading (concurrent execution) or multiprocessing (parallel execution), understanding how to structure tasks efficiently is an essential part of computational thinking.