Complexity and Tractability is about how hard or easy it is for computers to solve problems, especially as the size of the problem increases.
Tractable problems: Can be solved in a "reasonable" time (polynomial time, or P class).
Intractable problems: Take too long to solve as they grow (e.g., exponential time).
NP problems: Solutions are easy to check but hard to find.
NP-complete: Problems that are the hardest in NP—if we solve one quickly, we can solve all NP problems quickly.
Brute-force algorithms: Try all possibilities (often slow).
Greedy or heuristic algorithms: Give good-enough answers quickly.
Big-O notation: Measures how time/space grows with input size.
Scheduling tasks (e.g., timetables, delivery routes)
Cryptography
Puzzle-solving (like Sudoku)
Pathfinding in maps or games
Some important problems can't be solved quickly (yet).
Approximations, heuristics, or limits (like timeouts) are used to deal with this.
If P = NP were proven true, it would change everything (especially in security).
Algorithms help with efficiency but may limit options or fairness (e.g., route planning, resource allocation).
Debate over using “good enough” solutions vs. “perfect” ones.
Insights into how humans tackle similar complex problems differently.
Dr Tim Bell ran a webinar at the start of 2025 to help teachers understand this topic. The recording, "L3 External - Complexity and Tractability Webinar", can be found on YouTube.
This online webinar looks at what the general topic is about, and some specific learning activities that you can use with your students to engage them with the key concepts in the topic.
First you need to understand what Complexity & Tractability entails.
The GOAT: The CS Field Guide: Complexity & Tractability. Watch the video and read through the material. Learn this stuff inside out and you'll smash it.
Understanding the difference between polynomial and non-polynomial time is essential when analysing how efficient or tractable an algorithm is.
An algorithm runs in polynomial time if its running time can be bounded by a polynomial in the input size n, such as:
O(n)
O(n²)
Efficient / Tractable: Generally considered practical and solvable in reasonable time.
Grows predictably as input increases.
Examples:
Linear Search – O(n)
Bubble Sort – O(n²)
Dijkstra’s Algorithm – O(n²)
An algorithm runs in non-polynomial time if its time complexity grows faster than any polynomial function of the input. Common types:
Exponential: O(2ⁿ)
Factorial: O(n!)
Inefficient / Intractable for large inputs
Time explodes rapidly as n increases
Often used in brute-force solutions, puzzles, optimisation problems
Travelling Salesman Problem (Brute Force) – O(n!)
Subset Sum (Brute Force) – O(2ⁿ)
Hamiltonian Path Search – O(2ⁿ * n)
Feature       | Polynomial Time          | Non-Polynomial Time
Notation      | O(n), O(n²), O(n³)       | O(2ⁿ), O(n!), O(nⁿ)
Efficiency    | Efficient (scales well)  | Inefficient (scales poorly)
Used in       | Sorting, searching, graphs | Puzzles, encryption, AI games
Examples      | Merge sort, BFS          | TSP, brute-force knapsack
Tractability  | Tractable (solvable)     | Intractable for large n
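To make the contrast in the table concrete, here is a short sketch (plain Python, no external libraries) that prints a few values of each growth function side by side:

```python
import math

# Compare how polynomial and non-polynomial functions grow
# for small input sizes n.
for n in [5, 10, 15, 20]:
    print(f"n={n:>2}  n^2={n**2:>6}  2^n={2**n:>10}  n!={math.factorial(n):>22}")
```

By n = 20 the factorial column has already reached the quintillions, while n² is still only 400 — this is what "intractable for large n" looks like in practice.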
Big O notation is a way to describe the upper bound of an algorithm's time or space complexity in terms of input size. It essentially tells you how much the algorithm's resource usage (time or memory) grows as the input size grows.
Big O complexity chart: https://discusschool.com/uploads/default/original/1X/5c70f54a8b2023459516f174b741417323329c9f.jpeg
1. Drop Constants
Keep: O(n)
Drop: O(3n) → Simplifies to: O(n)
Reason: As n grows, multiplying by a constant doesn't change the growth rate.
2. Keep the Dominant Term
Keep: O(n²)
Drop: O(n² + n + 1) → Simplifies to: O(n²)
Reason: n² dominates n and constants as input grows.
3. Different Growth Rates Have Different Classes
Expression        | Simplified Big O | Growth Rate Type
O(n + log n)      | O(n)             | Linear
O(n² + n log n)   | O(n²)            | Quadratic
O(2n + 1000)      | O(n)             | Linear
O(n * 2ⁿ)         | O(n·2ⁿ)          | Exponential
Unsimplified Expression | Simplified Big O
O(5n + 20)              |
O(n² + n + 1)           |
O(6n log n + n²)        |
O(n + log n + 100)      |
O(n! + 2ⁿ + n³)         |
Unsimplified Expression | Simplified Big O
O(5n + 20)              | O(n)
O(n² + n + 1)           | O(n²)
O(6n log n + n²)        | O(n²)
O(n + log n + 100)      | O(n)
O(n! + 2ⁿ + n³)         | O(n!)
Time complexity refers to how the runtime of an algorithm increases as the size of the input increases.
It is a way of analysing the efficiency of an algorithm in terms of the number of operations it performs.
Example:
An algorithm with O(N) time complexity will take twice as long if the input size doubles.
Common Notations:
O(1) – Constant time (independent of input size)
O(n) – Linear time
O(log n) – Logarithmic time
O(n²) – Quadratic time
O(2ⁿ) – Exponential time
O(n!) – Factorial time
Space complexity measures the amount of memory an algorithm uses as the input size grows.
This includes memory used by variables, data structures, call stacks, and function calls.
Example:
An algorithm that creates an array proportional to the input size has O(n) space complexity.
Includes:
Input storage (if copied)
Auxiliary data structures (like stacks, queues, hash maps)
Recursive call memory (call stack)
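As an illustrative sketch (the function names squares_list and squares_sum are invented for this example), here is the difference between O(n) and O(1) auxiliary space for the same computation:

```python
def squares_list(n):
    # O(n) space: builds a list proportional to the input size
    return [i * i for i in range(n)]

def squares_sum(n):
    # O(1) auxiliary space: only a single running total is stored
    total = 0
    for i in range(n):
        total += i * i
    return total

print(len(squares_list(1000)))  # stores 1000 values in memory
print(squares_sum(1000))        # stores just one value at a time
```

Both functions do O(n) work in time, but only the first uses O(n) extra memory.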
Below is a collection of code examples that demonstrate a range of common Big O notations, including O(1), O(log n), O(n!), O(n²), and O(2ⁿ).
For each code snippet:
Carefully read and understand what the algorithm is doing.
Copy the code and paste it into the Big O Complexity Calculator provided below.
Click "Calculate" to view both the Time and Space Complexity of the algorithm.
This exercise will help you build your understanding of how different algorithms scale with input size and how their performance is measured.
Code Example (O(1) Complexity):
def get_first_element(arr):
    if arr:
        print(arr[0])
    else:
        print("Array is empty.")

# Example usage
data = [10, 20, 30, 40]
get_first_element(data)
Code Example (O(1) Complexity): Large data set
def get_first_element(arr):
    if arr:
        print(arr[0])  # still a single operation, regardless of list size
    else:
        print("Array is empty.")

# Example usage
large_data = list(range(1000000))
get_first_element(large_data)
Code Example (O(N²) Quadratic Time Complexity):
def bubble_sort(data):
    for _ in range(len(data)):          # O(n) passes
        for i in range(len(data) - 1):  # nested O(n) comparisons
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
    return data
Code Example (O(log N) Logarithmic Time Complexity):
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid  # Target found
        elif arr[mid] < target:
            low = mid + 1  # Search right half
        else:
            high = mid - 1  # Search left half
    return -1  # Target not found

# Example use
sorted_list = [1, 3, 5, 7, 9, 11, 13]
result = binary_search(sorted_list, 9)
print("Index:", result)
Code Example (O(2^N) Exponential Time Complexity):
def nth_fibonacci(n: int) -> int:
    if n in (1, 2):
        return 1
    return nth_fibonacci(n - 1) + nth_fibonacci(n - 2)
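For contrast, here is a sketch of the same recurrence with memoisation: caching previously computed values (via functools.lru_cache) means each value is computed only once, bringing the time complexity down from O(2ⁿ) to O(n).

```python
from functools import lru_cache

# Same Fibonacci recurrence as above, but with results cached:
# each value of n is computed only once, so the work is O(n).
@lru_cache(maxsize=None)
def nth_fibonacci_fast(n: int) -> int:
    if n in (1, 2):
        return 1
    return nth_fibonacci_fast(n - 1) + nth_fibonacci_fast(n - 2)

print(nth_fibonacci_fast(50))  # returns instantly; the naive version would not
```

This is a useful classroom demonstration: the algorithm's logic is unchanged, yet its complexity class is completely different.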
Code Example (O(N!) Factorial Time Complexity):
import itertools

# Distance matrix for 4 cities
distances = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0]
]

def calculate_total_distance(path, distance_matrix):
    total = 0
    for i in range(len(path) - 1):
        total += distance_matrix[path[i]][path[i + 1]]
    total += distance_matrix[path[-1]][path[0]]  # Return to start
    return total

def tsp_brute_force(distance_matrix):
    n = len(distance_matrix)
    cities = list(range(n))
    min_distance = float('inf')
    best_path = []
    # Fix the first city to avoid counting rotations of the same cycle
    for perm in itertools.permutations(cities[1:]):
        current_path = [0] + list(perm)
        current_distance = calculate_total_distance(current_path, distance_matrix)
        if current_distance < min_distance:
            min_distance = current_distance
            best_path = current_path
    return best_path, min_distance

path, distance = tsp_brute_force(distances)
print("Best Path:", path)
print("Shortest Distance:", distance)
Code Example (O(N log N) Average Time Complexity – Quicksort):
Note: quicksort averages O(n log n); with this first-element pivot its worst case (an already-sorted list) is O(n²).
def quicksort(arr):
    if len(arr) <= 1:
        return arr  # Base case
    pivot = arr[0]
    left = [x for x in arr[1:] if x < pivot]    # Elements less than pivot
    right = [x for x in arr[1:] if x >= pivot]  # Elements greater or equal
    return quicksort(left) + [pivot] + quicksort(right)

# Example use
array = [9, 3, 7, 1, 6, 2, 8]
sorted_array = quicksort(array)
print(sorted_array)
When analysing an algorithm’s performance, we use Big O notation to describe how its running time or space requirements grow as input size increases. We often consider three different scenarios:
Best case — Definition: the scenario where the algorithm performs the minimum number of operations.
Worst case — Definition: the scenario where the algorithm takes the maximum possible time.
Average case — Definition: the expected performance over all possible inputs of a given size.
Linear Search
BEST CASE
If the item you're searching for is the first element, the algorithm finishes in → O(1) time.
WORST CASE
If the item is at the end of the list or not present → O(n) time.
AVERAGE CASE
On average, the item will be found halfway through the list → O(n/2), simplified to O(n).
Bubble Sort
BEST CASE
Already sorted; only one pass needed (no swaps) → O(n) time
WORST CASE
Reversed list; requires maximum comparisons/swaps → O(n²) time.
AVERAGE CASE
Random data; many passes and comparisons/swaps → O(n²) time.
Brute-Force TSP
BEST CASE
All permutations must be generated → O(n!) time.
WORST CASE
No difference—algorithm always performs factorial steps → O(n!).
AVERAGE CASE
Still generates every permutation → O(n!).
Selection Sort
BEST CASE
Still compares every pair (no early exit) → O(n²) time.
WORST CASE
Still checks all remaining elements for each current index → O(n²) time.
AVERAGE CASE
Always performs the same number of comparisons → O(n²) time.
In computer science, some problems are too complex or time-consuming to solve exactly within a reasonable time — especially those with non-polynomial time complexity like O(2ⁿ) or O(n!). This is where approximation and heuristic methods come in.
An approximation algorithm gives a solution that is close to optimal, but not always exact.
Used when:
An exact solution is too slow
You need a "good enough" answer quickly
Example:
Travelling Salesman Problem (TSP) – Finding the shortest path visiting every city once is O(n!). An approximation algorithm like Nearest Neighbour can give a route that’s close to the best, in polynomial time.
A heuristic is a rule-of-thumb or strategy that guides problem-solving without guaranteeing the best solution.
Heuristics often:
Use simplifying assumptions
Are fast and adaptive
Work well in real-world or NP-hard problems
Example:
Bin Packing Problem – A heuristic like First-Fit Decreasing (FFD) places items in bins efficiently, but not perfectly.
A* pathfinding algorithm – uses heuristics to estimate distance to the goal and guide the search efficiently.
How it works:
Start at any location.
Go to the nearest unvisited location.
Repeat until all locations are visited.
Return to the starting point (if needed).
Why it works:
It’s fast and easy — always picks the shortest next leg.
Limitations:
Can make locally good choices that are globally inefficient.
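The steps above can be sketched in a few lines of Python (reusing the 4-city distance matrix from the brute-force TSP example; the function name nearest_neighbour is our own):

```python
def nearest_neighbour(distance_matrix, start=0):
    """Greedy route: always travel to the closest unvisited location."""
    n = len(distance_matrix)
    unvisited = set(range(n)) - {start}
    path = [start]
    while unvisited:
        current = path[-1]
        # pick the closest unvisited city from where we are now
        nxt = min(unvisited, key=lambda city: distance_matrix[current][city])
        path.append(nxt)
        unvisited.remove(nxt)
    return path

distances = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(nearest_neighbour(distances))  # [0, 1, 3, 2]
```

Note how cheap this is compared with brute force: each step is a single minimum over the remaining cities, giving O(n²) work instead of O(n!).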
Local Delivery Routing (Courier, Food, or Postal Services)
Hotel Housekeeping or Maintenance Rounds
Warehouse Order Picking
Tourist Navigation or Sightseeing Tours
Calculate the value-to-weight ratio for each item.
Sort items in descending order of value-to-weight ratio.
Starting with the highest ratio, add as much of each item as possible to the knapsack.
Repeat until the knapsack is full.
It always takes the most valuable “bang for your buck” at each step.
Simple and very fast: time complexity is O(n log n) due to sorting.
Only works optimally for the fractional knapsack problem (where you can take parts of items).
For the 0/1 knapsack (where items must be taken whole or not at all), this approach can be very inefficient and miss the best combination.
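A minimal sketch of the greedy strategy for the fractional knapsack (the item values and weights below are invented for illustration):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns the maximum value
    obtainable when fractions of items may be taken."""
    # 1. & 2. sort by value-to-weight ratio, highest first — O(n log n)
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = 0.0
    # 3. & 4. take as much of each item as fits, best ratio first
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)       # all of it, or whatever fits
        total_value += value * take / weight
        capacity -= take
    return total_value

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

In the example, the greedy pick takes all of the first two items and two-thirds of the third, which happens to be optimal here because fractions are allowed.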
Loading Cargo (eg Airlines)
Online Shopping Discounts
Resource Allocation in Cloud Computing
Emergency Supply Distribution
A* is a pathfinding and graph traversal algorithm that uses heuristics to efficiently find the shortest path from a start node to a goal node. It is widely used in artificial intelligence, games, GPS navigation, and robotics.
A* balances two things:
Cost from the start to the current node — g(n)
Estimated cost from the current node to the goal — h(n) (heuristic)
It evaluates nodes based on:
f(n) = g(n) + h(n)
Where:
f(n) is the total estimated cost of the cheapest solution through node n
g(n) is the cost so far
h(n) is the heuristic estimate of the cost to the goal
It prioritises paths that appear more promising.
If the heuristic is admissible (never overestimates), A* will always find the shortest path.
It is often much faster than uninformed search algorithms like Dijkstra’s for many practical problems.
Can be slow for very large or dense graphs if the heuristic is not well-designed.
Needs a good heuristic function to be efficient.
Can use a lot of memory, as it stores many nodes in memory during execution.
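A compact sketch of A* on a small 2D grid, using Manhattan distance as the admissible heuristic (the grid layout and function names are our own for illustration, not from a particular library):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid; 0 = free cell, 1 = wall. Returns the shortest path."""
    def h(pos):  # Manhattan distance: admissible on a 4-connected grid
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    g_cost = {start: 0}
    came_from = {}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:                 # reconstruct path back to start
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                new_g = g + 1
                if new_g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = new_g
                    came_from[(nr, nc)] = node
                    # f(n) = g(n) + h(n) decides the queue priority
                    heapq.heappush(open_heap, (new_g + h((nr, nc)), new_g, (nr, nc)))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
path = a_star(grid, (0, 0), (2, 0))
print(path)
```

The only opening in the wall is at column 2, so A* threads the path through it; because the heuristic never overestimates, the returned path is guaranteed shortest.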
GPS Navigation         | Finding shortest driving routes
Games (e.g. AI agents) | Enemy or NPC pathfinding
Robotics               | Autonomous robot movement
Logistics              | Route planning in warehouses
Simulate artificial ants moving through the problem space (e.g. cities in a route).
Each ant builds a path based on two things:
Pheromone trails (previously good paths)
Heuristic values (e.g. distance between cities)
After all ants complete a route:
Pheromone is deposited on shorter/better routes.
Pheromone evaporates over time (to avoid over-committing to early bad paths).
In the next iteration, ants are more likely to follow stronger pheromone trails — encouraging collaborative learning.
This process repeats for many cycles, gradually improving the overall solution.
Mimics how real ants find efficient paths between the nest and food.
Encourages exploration early, but shifts to exploitation of good solutions over time.
Works well for complex optimisation problems like TSP and scheduling.
Slow to converge compared to some heuristics.
May get stuck in local optima if pheromone trails become too dominant early.
Requires fine-tuning of many parameters (evaporation rate, ant count, influence of pheromone vs distance, etc.).
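The cycle above can be shown as a deliberately simplified sketch (the ant count, iteration count, and evaporation rate below are arbitrary illustrative choices, and the desirability rule is reduced to pheromone divided by distance):

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=50, evaporation=0.5, seed=42):
    """Simplified Ant Colony Optimisation sketch for the TSP."""
    random.seed(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_path, best_len = None, float("inf")

    def tour_length(path):
        return sum(dist[path[i]][path[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _ant in range(n_ants):        # each ant builds a full tour
            path = [0]
            unvisited = set(range(1, n))
            while unvisited:
                cur = path[-1]
                candidates = list(unvisited)
                # desirability = pheromone strength x closeness (1/distance)
                weights = [pheromone[cur][c] / dist[cur][c] for c in candidates]
                choice = random.choices(candidates, weights=weights)[0]
                path.append(choice)
                unvisited.remove(choice)
            tours.append((tour_length(path), path))
        # evaporate old pheromone, then reinforce this iteration's routes
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1 - evaporation)
        for length, path in tours:
            for i in range(n):
                a, b = path[i], path[(i + 1) % n]
                pheromone[a][b] += 1.0 / length   # shorter tours deposit more
                pheromone[b][a] += 1.0 / length
        length, path = min(tours)
        if length < best_len:
            best_len, best_path = length, path
    return best_path, best_len

# Same 4-city distance matrix as the brute-force TSP example
distances = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
best_path, best_len = ant_colony_tsp(distances)
print("Best tour:", best_path, "length:", best_len)
```

On a toy instance like this the colony converges to the optimal cycle almost immediately; the approach earns its keep on instances far too large for brute force.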
Vehicle routing & logistics
Network routing
Scheduling in manufacturing
Pathfinding in robotics and games
The Mobile Surgical Unit (Te Waka Hauora) is a modern, fully equipped operating theatre, designed and built to be used for a wide range of day surgeries. The unit travels throughout New Zealand. It can provide low-risk elective day surgeries in rural communities that do not have local access to a full operating theatre. In 2021–2022, the service operated in 28 locations across the country.
The Mobile Surgical Unit (Te Waka Hauora) proposes to increase its destinations from 28 areas to travel to all small urban areas with populations of 1,000 to 9,999. There are 152 small urban areas in New Zealand.
The unit wants to visit each community only once, and to minimise overall travel distance, time, and associated costs.
Someone suggests using a computer to brute-force calculate the optimal route.
(f) Using your understanding of complexity and tractability, explain and discuss:
· what type of problem this is
· whether it is tractable or not
· how the problem could potentially be solved
· whether it is possible to find the optimal solution using the brute-force approach.
Above resource from 2024 NZQA L3 Exam
1. What is the time complexity of a brute-force algorithm for solving the Travelling Salesman Problem (TSP) with n destinations?
a) O(n)
b) O(n²)
c) O(2ⁿ)
d) O(n!)
2. Which of the following best describes the space complexity of storing all possible paths in a brute-force TSP solution?
a) O(n)
b) O(log n)
c) O(n!)
d) O(n²)
3. The Mobile Surgical Unit wants to visit 152 towns only once, finding the most efficient route. Why is this considered an NP-hard problem?
a) Because the solution is always the same regardless of input
b) Because the number of possibilities grows quadratically
c) Because it has no solution
d) Because the number of possible solutions grows faster than any polynomial and no known algorithm can solve all cases in polynomial time
4. Explain what is meant by the P vs NP question, and how it relates to the TSP.
5. If P = NP were proven true, what would this mean for solving the Mobile Surgical Unit’s routing problem?
6. Describe the best-case scenario for a heuristic algorithm solving the Mobile Surgical Unit route problem.
7. What would the worst-case scenario look like for a brute-force solution?
1. What is the time complexity of a brute-force algorithm for solving the Travelling Salesman Problem (TSP) with n destinations?
a) O(n)
b) O(n²)
c) O(2ⁿ)
d) O(n!)
Answer: d) O(n!) — with the first city fixed, there are (n − 1)! possible orderings to check.
2. Which of the following best describes the space complexity of storing all possible paths in a brute-force TSP solution?
a) O(n)
b) O(log n)
c) O(n!)
d) O(n²)
Answer: c) O(n!) — storing every permutation requires factorial space.
3. The Mobile Surgical Unit wants to visit 152 towns only once, finding the most efficient route. Why is this considered an NP-hard problem?
a) Because the solution is always the same regardless of input
b) Because the number of possibilities grows quadratically
c) Because it has no solution
d) Because the number of possible solutions grows faster than any polynomial and no known algorithm can solve all cases in polynomial time
Answer: d)
4. Explain what is meant by the P vs NP question, and how it relates to the TSP.
Sample answer:
The P vs NP question asks whether every problem that can be verified in polynomial time (NP) can also be solved in polynomial time (P).
The TSP is in NP, because if someone gives you a proposed route, you can quickly check if it meets the length requirement. However, we don’t know how to find that route in polynomial time. If P ≠ NP, then no efficient algorithm exists to solve TSP for all cases.
5. If P = NP were proven true, what would this mean for solving the Mobile Surgical Unit’s routing problem?
Sample answer:
It would mean there exists a polynomial-time algorithm that can find the optimal route for the TSP. The Mobile Surgical Unit could then compute the shortest route to all 152 towns efficiently. However, most experts believe that P ≠ NP, so heuristic or approximation methods are more realistic.
6. Describe the best-case scenario for a heuristic algorithm solving the Mobile Surgical Unit route problem.
Sample answer:
The best case is when the heuristic (e.g. Nearest Neighbour) accidentally finds the optimal route or a very close approximation with minimal computation. This can happen if the town layout is simple and well-aligned with the heuristic's assumptions.
(Without checking all n! permutations (where n = 152), we have no guarantee that the route found is the shortest. In real-world scenarios, this is usually impractical due to the factorial time complexity of brute-force solutions, so we settle for "good enough" rather than proven optimal. )
7. What would the worst-case scenario look like for a brute-force solution?
Sample answer:
The worst case is when the number of towns is large (like 152), and the algorithm has to check every possible permutation of the route—O(n!) combinations. This would take billions of years, making it completely infeasible.
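A quick back-of-the-envelope check of that claim (the "one billion routes per second" rate is an assumed, generous figure):

```python
import math

# With the first town fixed, a brute-force search over 152 towns
# must consider 151! possible orderings.
routes = math.factorial(151)
print("Number of routes has", len(str(routes)), "digits")

# Assume (generously) a computer checks one billion routes per second:
seconds_per_year = 60 * 60 * 24 * 365
years = routes / 1e9 / seconds_per_year
print(f"Roughly {years:.2e} years to check them all")
```

The resulting number of years dwarfs the age of the universe by hundreds of orders of magnitude, which is exactly what "completely infeasible" means here.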
Humans deal with complex problems all the time — from planning travel to scheduling tasks — and we often do so without formal algorithms. Instead of brute-force solutions, we rely on heuristics, intuition, and experience to make decisions quickly and effectively.
This mirrors what computers do when faced with intractable problems: when exact methods (e.g. brute force with O(n!)) are too slow, we switch to approximations and heuristics.
Comparing Perspectives on Complexity
Perspective          | Approach                        | Strengths                               | Limitations
Human (Intuitive)    | Uses judgment, past experiences | Fast, flexible, includes emotion/context | Prone to bias, hard to replicate or verify
Heuristic Algorithm  | Uses rules of thumb or shortcuts | Efficient, scalable, consistent         | No guarantee of optimal solution
Exact Algorithm      | Explores all possible options   | Always finds the best answer            | Often too slow or resource-heavy
Scenario:
A courier company like NZ Post or CourierPost needs to plan delivery routes for vans across a city or rural region. Each van must deliver parcels to dozens of addresses efficiently in a single day.
What could a human approach be?
What are the benefits and limitations of this approach?
What could a Computer Science approach be?
What type of problem is this?
What is the Big O notation?
Would this work?
What are the benefits and limitations of this approach?
What could a Heuristic approach be?
What heuristics could be used?
Explain how the heuristic you considered actually works for this scenario.
The driver might rely on familiar routes, local traffic knowledge, or just “what feels right.”
They might choose to:
Avoid roads they know are busy.
Cluster deliveries by suburb.
Prioritise regular customers.
Limitation: The route may not be the shortest or most fuel-efficient, but it’s fast to decide, and “good enough.”
A software system could try to optimally plan the shortest possible route using exact algorithms — essentially solving a version of the Travelling Salesman Problem (TSP).
However, for 50+ delivery points, the number of possible routes is enormous: O(n!).
Brute-force becomes intractable – even supercomputers would take too long.
Instead, the system uses a heuristic like:
Nearest Neighbour
Start at the courier depot or the first delivery location.
From the current location, find the closest unvisited delivery address.
Travel to that address and mark it as delivered.
Repeat step 2 until all deliveries are completed.
Optionally, return to the depot (if the route is required to end at the start point)
These give good, fast solutions that drivers can follow in real-time, balancing efficiency and practicality.
DTTA will provide one at the start of Term 3. This will be advertised on the DTTA Mobilise forum.
This is the DTTA Derived Grade Exam Resources for 91908 provided in 2024.
Your teacher will provide this. Do your best and remember to give specific examples!