Algorithm developers and mathematicians commonly talk about algorithmic notation. These notations describe an algorithm's performance in mathematical form. Big O notation is also known as Landau's notation.
Determining the Big O notation of an algorithm helps one predict its overall performance before any test run or benchmark. Because the notation is mathematical, one can combine the terms for each part of an algorithm to get a full picture of how it performs, with the largest term dominating. In this guide, we look through some of the common notations.
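As a hypothetical illustration of combining terms, consider a function (not from the sections below) that performs one single pass followed by one nested pass over the same input; the total O(n + n^2) simplifies to O(n^2), since the largest term dominates as the input grows:

```go
package main

import "fmt"

// CountZeroesAndPairs is a hypothetical example combining two costs:
// an O(n) pass to count zero bytes, then an O(n^2) nested pass to
// count equal pairs. The total O(n + n^2) simplifies to O(n^2).
func CountZeroesAndPairs(list []byte) (zeroes, pairs int) {
	// O(n): a single pass over the input.
	for _, b := range list {
		if b == 0x00 {
			zeroes++
		}
	}
	// O(n^2): every element compared against every later element.
	for i := range list {
		for j := i + 1; j < len(list); j++ {
			if list[i] == list[j] {
				pairs++
			}
		}
	}
	return zeroes, pairs
}

func main() {
	z, p := CountZeroesAndPairs([]byte{0x00, 0x01, 0x00})
	fmt.Println(z, p) // 2 zeroes, 1 equal pair
}
```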
O(1) means the algorithm performs a constant amount of work regardless of the size of the input. Whether the input is smaller or bigger, the performance stays the same.
A code example in Go would be:
func CheckHeader(list *[]byte) bool {
	// Only the first byte is ever inspected, so this is O(1).
	return (*list)[0] == 0x02
}
Notice that regardless of the size of the list, the function always checks the first byte and exits immediately after the condition check.
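To see this constant cost in action, here is a self-contained sketch (using a plain slice signature rather than a pointer, for brevity) that runs the same check on a tiny input and a large one; both calls do an identical amount of work:

```go
package main

import "fmt"

// checkHeader mirrors the example above: it inspects only the
// first byte, so its cost is O(1) no matter how long the slice is.
func checkHeader(list []byte) bool {
	return len(list) > 0 && list[0] == 0x02
}

func main() {
	small := []byte{0x02}
	large := make([]byte, 1<<20) // 1 MiB of zeroes
	large[0] = 0x02
	// Both calls inspect exactly one byte.
	fmt.Println(checkHeader(small), checkHeader(large))
}
```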
O(n) means the algorithm's running time grows linearly with the size of the input: the larger the input, the longer it takes to produce an output. This is commonly seen when there is a single loop inside a function.
A code example in Go would be:
func ZeroingList(list *[]byte) {
	for i := range *list {
		(*list)[i] = 0x00
	}
}
Notice that the function only ends when the whole list has been processed, and the execution time depends on the length of the list. Hence, the larger the list, the longer it takes to zero all of its elements.
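The linear cost can be made explicit with a small instrumented sketch (a hypothetical variant that counts how many elements the loop touches):

```go
package main

import "fmt"

// zeroingList zeroes every element and returns how many elements it
// touched: the loop body runs exactly len(list) times, so the work
// grows linearly with the input, i.e. O(n).
func zeroingList(list []byte) int {
	steps := 0
	for i := range list {
		list[i] = 0x00
		steps++
	}
	return steps
}

func main() {
	fmt.Println(zeroingList(make([]byte, 10)))   // 10 steps
	fmt.Println(zeroingList(make([]byte, 1000))) // 1000 steps
}
```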
O(n^2) means the algorithm's running time grows with the square of the input size. The larger the input, the disproportionately longer the output takes to produce (a multiple of the input sizes, as in N * M). This is commonly seen when a loop has another loop nested inside it.
A code example in Go would be:
func StringFragments(list *[]string) *[]string {
	out := []string{}
	for _, item := range *list {
		// Collect every two-character fragment of each item.
		for j := 0; j+2 <= len(item); j++ {
			out = append(out, item[j:j+2])
		}
	}
	return &out
}
Notice that for each item in the list (N), you need to process each of the item's characters based on the item's length (M). Hence, it takes N * M steps to complete the entire function.
If the internal loop iterates over the original list itself (M = N), you get N * N, which is essentially N^2. Likewise, the deeper the nesting, the longer it takes (e.g. N^3, N^4, ...).
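The M = N case can be sketched as a duplicate check, where the inner loop scans the same list as the outer one (a hypothetical helper, not part of the example above):

```go
package main

import "fmt"

// hasDuplicate compares every element against every other element of
// the same list: N iterations outside times N inside, so O(N * N),
// which is O(N^2).
func hasDuplicate(list []byte) bool {
	for i := range list {
		for j := range list {
			if i != j && list[i] == list[j] {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(hasDuplicate([]byte{1, 2, 3}))    // false
	fmt.Println(hasDuplicate([]byte{1, 2, 2, 3})) // true
}
```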
O(2^n) means the algorithm takes exponential time: each addition to the input roughly doubles the work. The growth curve starts small and flat and then rises extremely steeply. This is best seen in Fibonacci-style recursion:
A code example in Go would be:
func Fibonacci(number int) int {
	if number <= 1 {
		return number
	}
	return Fibonacci(number-2) + Fibonacci(number-1)
}
Notice that the recursive function calls itself twice with decremented inputs until it reaches the base case (number <= 1), and the results are then summed back up. This takes a huge amount of time when the starting number is large, but is very fast when the number is small.
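To make the exponential growth visible, here is a sketch of the same naive recursion instrumented with a call counter (the counter is an addition for illustration); the number of calls grows geometrically with the input:

```go
package main

import "fmt"

// fibCount is the naive Fibonacci recursion, instrumented with a call
// counter so the exponential growth in work becomes visible.
func fibCount(number int, calls *int) int {
	*calls++
	if number <= 1 {
		return number
	}
	return fibCount(number-2, calls) + fibCount(number-1, calls)
}

func main() {
	for _, n := range []int{5, 10, 20} {
		calls := 0
		result := fibCount(n, &calls)
		// n=5 takes 15 calls, n=10 takes 177, n=20 takes 21891:
		// each step up multiplies the work, rather than adding to it.
		fmt.Printf("Fibonacci(%d) = %d after %d calls\n", n, result, calls)
	}
}
```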
O(log n) means the algorithm takes logarithmic time to process the input: the running time grows only with the logarithm of the input size, so doubling the input adds just a constant amount of extra work. This type of function is best seen in divide-and-conquer routines like binary search on a sorted list.
A code example in Go would be:
func BinarySearch(needle int, haystack []int) bool {
	low := 0
	high := len(haystack) - 1
	for low <= high {
		median := (low + high) / 2
		if haystack[median] < needle {
			low = median + 1
		} else {
			high = median - 1
		}
	}
	if low == len(haystack) || haystack[low] != needle {
		return false
	}
	return true
}
Notice that the larger the haystack, the more value BinarySearch brings, since each iteration halves the search range with a median jump until the final answer is found. For a very small haystack, BinarySearch can actually be more costly than a simple O(n) loop from start to end.
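An instrumented variant of the same search (the step counter is added for illustration) shows the logarithmic growth directly: multiplying the haystack size many times over adds only a handful of extra halving rounds.

```go
package main

import "fmt"

// binarySearchSteps is the binary search from above, instrumented to
// count how many times the range is halved: roughly log2(len(haystack)).
func binarySearchSteps(needle int, haystack []int) (found bool, steps int) {
	low, high := 0, len(haystack)-1
	for low <= high {
		steps++
		median := (low + high) / 2
		if haystack[median] < needle {
			low = median + 1
		} else {
			high = median - 1
		}
	}
	found = low < len(haystack) && haystack[low] == needle
	return found, steps
}

func main() {
	for _, size := range []int{16, 1024, 1 << 20} {
		haystack := make([]int, size)
		for i := range haystack {
			haystack[i] = i
		}
		// Even a million-element haystack needs only ~20 halvings.
		found, steps := binarySearchSteps(size-1, haystack)
		fmt.Printf("size %d: found=%v in %d steps\n", size, found, steps)
	}
}
```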
That's all about Big O notations.