
# Understanding Time Complexity, With Examples

## Key Points

• Time complexity measures algorithm performance based on input size.
• Factors affecting time complexity include input size, operations performed, nested loops/recursion, and hardware.
• Common time complexity types include O(1), O(n), O(log n), O(n log n), O(n^2), O(n^3), O(2^n), and O(n!).
• Lower time complexity means shorter run-time and better scalability with larger datasets.
• Understanding time complexity is crucial for efficient algorithm design and optimization in various fields.

Understandably, one of the most important factors when picking an algorithm is its performance. If you’re using an inefficient algorithm, your performance will be subpar, no matter how optimized the rest of your code is. The main method for measuring algorithm performance is by using a concept known as time complexity. This concept can seem complicated at first, so we’re going to break down exactly what it is, how it’s represented, as well as examples of the common complexities.

## What is Time Complexity?

In simple terms, time complexity is a way to represent how the run-time of an algorithm increases with input size. Generally, input size has a large effect on an algorithm’s performance. For example, when searching an array for a specific element, the complexity is usually equal to O(n). This means that it depends linearly on n, which is the size of the input. As the number of elements increases, we have to search more elements to find the one we want. Therefore, the run-time of the algorithm is longer.
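A minimal sketch of that linear search might look like this (the function name is ours, for illustration):

```python
def linear_search(arr, target):
    # In the worst case every element is checked once,
    # so the run-time grows linearly with len(arr)
    for i, element in enumerate(arr):
        if element == target:
            return i
    return -1
```

Doubling the number of elements roughly doubles the number of comparisons in the worst case, which is exactly what O(n) expresses.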

It should be said that, while asymptotic notation is a closely related concept, it’s not the same as time complexity. You can think of asymptotic notation as how we represent time complexity, and time complexity as the underlying concept. Considering our previous example, O(n) would be considered asymptotic notation, used to represent the time complexity of the search algorithm. We’ll get into the types of time complexity next, and how they’re represented using asymptotic notation.

## What Factors Affect Time Complexity?

As previously mentioned, one of the main factors affecting time complexity is the input size. However, this isn’t the only factor. The number of operations performed also greatly affects the complexity, as well as the presence of nested loops, recursion, and even the hardware we’re using.
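The effect of nesting can be sketched with a toy example: one loop over n elements performs n iterations, while nesting a second loop inside it performs n × n iterations.

```python
def count_pairs(arr):
    # One loop over n elements is O(n); nesting a second loop
    # inside it performs n * n iterations -- O(n^2)
    count = 0
    for a in arr:
        for b in arr:
            count += 1
    return count
```

For a 3-element list this performs 9 inner iterations, and for a 100-element list, 10,000 — the nesting, not the input alone, drives the growth.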

## Common Cases of Time Complexity, With Examples

There are many different time complexities possible, each with its own notation. Big-O notation is used most commonly to represent the way complexity grows with input size. The notations you’ll likely come across most frequently include O(1), O(n), O(log n), O(n log n), O(n^2), O(n^3), O(2^n), and O(n!). We’ll get into these next.

### Constant Time – O(1)

This complexity is known as constant time because it’s not dependent on the input size at all. Therefore, the run-time won’t change, no matter the input. Here are some examples of where O(1) is found:

• Checking whether an integer is odd or even.
• Checking whether an item is NULL.
• Accessing the element at a specific index in some sort of data structure.

For example, consider this code in Python:

```python
def access_element(arr, index):
    return arr[index]
```

Here, we’re accessing the element of the “arr” array at position “index”. Since this lookup takes the same number of steps no matter how many elements the array holds, the time needed here is constant.

### Linear time – O(n)

Another common complexity type is linear, or O(n). Here, the run-time increases in direct proportion to the input size. This will mostly be found in these situations:

• Obtaining the minimum or maximum value of an array.
• Performing an operation on each element.
• Printing all of the values within a list.

For example, we have this operation:

```python
def sum_elements(arr):
    total = 0
    for element in arr:
        total += element
    return total
```

Here, we’re summing up the values of the elements in an array. Since we’re doing the same operation on each, this complexity will increase as we have more elements in the list.

### Logarithmic time – O(log n)

When we say logarithmic time, we mean that the amount of work remaining halves with each step. The binary search algorithm is a classic example. It takes a sorted list and a target element, then repeatedly halves the search area by comparing the target with the middle element. Since the list is sorted, this comparison tells us whether the target must lie in the left or right half. The algorithm then repeats these steps to find the target in logarithmic time. This can be implemented as follows in Python:

```python
def binary_search(arr, target):
    low = 0
    high = len(arr) - 1

    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

    return -1

arr = [2, 5, 8, 12, 16, 23, 38]
target = 16

index = binary_search(arr, target)
if index != -1:
    print("Element", target, "found at index", index)
else:
    print("Element", target, "not found")
```
In this case, we’re looking for the value 16, which is found at index 4. The time complexity of binary search is logarithmic, specifically base-2: a list of n elements needs at most ⌊log2(n)⌋ + 1 comparisons. Here the target is found after 3 comparisons, since ⌊log2(7)⌋ + 1 = 3.
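That comparison bound can be checked directly (helper name is ours, for illustration):

```python
import math

def max_comparisons(n):
    # A sorted list of n elements needs at most floor(log2(n)) + 1
    # comparisons before binary search terminates
    return math.floor(math.log2(n)) + 1

print(max_comparisons(7))   # 3 comparisons for our 7-element list
```

Notice how slowly this grows: even a million-element list needs at most 20 comparisons.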

### Linearithmic – O(n log n)

Linearithmic complexity is logarithmic with an extra linear dependency on input size. Merge sort is a good example of such an algorithm. It works by a divide-and-conquer process, similar to binary search, but performs a merging step once all subarrays have been fully divided. Consider the following code:

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    return merge(left, right)

def merge(left, right):
    merged = []
    i = 0
    j = 0

    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1

    merged.extend(left[i:])
    merged.extend(right[j:])

    return merged

arr = [8, 3, 1, 7, 4, 6, 2, 5]
sorted_arr = merge_sort(arr)
print(sorted_arr)
```

We have an array of 8 elements, which is split into smaller halves again and again until only single-element subarrays remain. These are then merged back together, level by level, to give a sorted array.

### Quadratic time – O(n^2)

Quadratic time is a kind of polynomial time (O(n^c)), where the complexity grows with the square of the input size. We can consider the bubble sort algorithm an example of this: adjacent elements are compared and swapped over and over until the array is sorted. This is shown in the following code.

```python
def bubble_sort(arr):
    n = len(arr)

    for i in range(n):
        for j in range(0, n-i-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]

arr = [8, 3, 1, 7, 4, 6, 2, 5]
bubble_sort(arr)
print(arr)
```

In the worst case, such as a reverse-sorted array, every pass must compare and swap adjacent elements, so the number of operations grows quadratically.
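To see where the quadratic growth comes from, note that the passes compare (n−1) + (n−2) + … + 1 pairs, which sums to n(n−1)/2 (helper name is ours, for illustration):

```python
def worst_case_comparisons(n):
    # Bubble sort's passes compare (n-1) + (n-2) + ... + 1 pairs,
    # which sums to n * (n - 1) / 2 -- quadratic in n
    return n * (n - 1) // 2

print(worst_case_comparisons(8))   # 28 comparisons for our 8-element array
```

Doubling the input size roughly quadruples the comparison count, which is the hallmark of O(n^2).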

### Cubic time – O(n^3)

Cubic time means that the complexity increases even faster than quadratic time. Therefore, we usually want to avoid greater polynomial complexities where we can. An example is the Floyd-Warshall algorithm, used for calculating the shortest path distances between every pair of vertices in a graph. This is shown in the code block next.

```python
def floyd_warshall(graph):
    n = len(graph)
    # Copy each row so we don't mutate the caller's graph
    distances = [row[:] for row in graph]

    for k in range(n):
        for i in range(n):
            for j in range(n):
                distances[i][j] = min(distances[i][j], distances[i][k] + distances[k][j])

    return distances

inf = float('inf')
graph = [
    [0, inf, -2, inf],
    [4, 0, 3, inf],
    [inf, inf, 0, 2],
    [inf, -1, inf, 0]
]

shortest_paths = floyd_warshall(graph)
for row in shortest_paths:
    print(row)
```

Since we use three nested for loops, each iterating over every vertex, every (i, j) pair is checked against each intermediate vertex k, so the complexity is cubic. It should be noted that Floyd-Warshall is actually quite fast for a cubic-time algorithm, as long as it’s used on a fairly typical graph.

### Exponential time – O(2^n)

Exponential here means the number of operations roughly doubles each time the input size increases by one. We can illustrate this with the naive recursive Fibonacci implementation:

```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

result = fibonacci(5)
print(result)
```

This simple example calculates the Fibonacci number at index 5, which is 5, as we can see in the output. Because each call spawns two more calls, exponential algorithms aren’t usable for large inputs: the operation count explodes very quickly.
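We can make the explosion visible by counting the calls the naive recursion makes (the counting helper is ours, for illustration):

```python
def fib_calls(n):
    # Count every call the naive recursive Fibonacci makes,
    # including the two recursive calls each non-base case spawns
    if n <= 1:
        return 1
    return 1 + fib_calls(n - 1) + fib_calls(n - 2)

print(fib_calls(5))    # 15 calls for n = 5
print(fib_calls(10))   # 177 calls for n = 10
```

Going from n = 5 to n = 10 multiplies the work by more than ten, even though the input only doubled.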

### Factorial time – O(n!)

The factorial of a number is the product of every positive integer up to and including it. For example, the factorial of 3 is equal to 6, because 1 * 2 * 3 = 6. Since the growth of this complexity is enormous, factorial algorithms are rarely used in everyday programming. However, generating all of the permutations of a list or string is an example of a process with factorial time. For example, consider this code:

```python
def generate_permutations(elements):
    if len(elements) == 1:
        return [elements]

    permutations = []
    for i in range(len(elements)):
        remaining = elements[:i] + elements[i+1:]
        sub_permutations = generate_permutations(remaining)
        for perm in sub_permutations:
            permutations.append([elements[i]] + perm)

    return permutations

elements = [1, 2, 3]
permutations = generate_permutations(elements)
for perm in permutations:
    print(perm)
```

In this case, we’re trying to find all possible permutations of the elements [1, 2, 3]. Recursively, each element is picked as the starting element, and the permutations of the remainder are calculated. We receive 6 permutations here because 3! = 6.
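To get a feel for how quickly factorial growth outpaces everything else, we can print n! for a few values using Python’s standard math module:

```python
import math

# Factorial growth quickly dwarfs the other complexities:
# at n = 10 we already need over 3.6 million operations
for n in [3, 5, 10]:
    print(n, math.factorial(n))
```

Compare this with 2^10 = 1024: by n = 10, factorial time is already thousands of times worse than exponential time.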

## Implications and Applications of Time Complexity

Understanding the time complexity of an algorithm is essential to maximize the efficiency and scalability of your algorithm. As can be expected, those with lower complexities are more efficient and easier to scale. Being able to determine the rate-determining algorithm or the slowest algorithm, helps to identify the bottlenecks in our process and optimize these steps. As such, time complexity is important in designing algorithms and optimizing performance, and has applications in software engineering, database systems, machine learning, and computational sciences.

## Wrapping Up

Time complexity is closely related to asymptotic notation, which is used to represent it. The most common types of time complexity include O(1), O(n), O(n^c), O(log n) and O(n log n). Choosing the correct algorithm for your needs, as well as optimizing its performance, is crucial in making your code efficient and scalable. Understanding how time complexity is affected will help you design programs with better performance, and reduce the time needed to carry out your operations on large volumes of data.

## Frequently Asked Questions

### What is time complexity?

Time complexity is a measure of the time an algorithm takes, as a function of input size. As such, it’s an important indicator of performance.

### What’s the difference between time complexity and asymptotic notation?

Asymptotic notation is used to represent time complexity, which is the underlying concept. For example, big-O notation is commonly used to represent the best and worst cases of time complexity, and is considered asymptotic notation.

### What are the common types of time complexity?

The most common types are O(1), O(n), O(n^c), O(log n) and O(n log n).

### What are the implications of time complexity?

Time complexity is important in evaluating an algorithm’s performance, optimizing it and designing efficient algorithms from scratch. From software engineering and network routing to database systems and machine learning, understanding time complexity has important implications virtually anywhere algorithms are used.

### Is a lower or higher time complexity better?

A lower complexity is preferable, since this means the run-time of the algorithm is shorter. These types of algorithms will scale better with larger datasets.

### What is space complexity?

Space complexity refers to the additional memory storage required by an algorithm during its operation. While this doesn’t affect the time complexity, it can affect overall performance, if memory is constrained. Therefore, it’s another important concept to be aware of when optimizing a program.

#### Duncan Dodsworth, Author for History-Computer

Duncan Dodsworth is a writer at History Computer, primarily producing content about personal tech, computers, and gaming. Duncan has been writing about tech for over 5 years since he majored in Chemistry in 2016. A resident of London, England, Duncan likes listening to music, playing around with gadgets and reading books half as often as he says he does.