Dijkstra’s algorithm is just one example of what we call a shortest-path algorithm. In a nutshell, these are algorithms that are used to find the shortest distance (or path) between vertices of a graph.
While the shortest path can be defined in various ways, such as time, cost, or distance, it usually refers to the distance between nodes. If you want to know more about how Dijkstra’s algorithm works and what you can use it for, you’re in the right place.
What Is Dijkstra’s Algorithm?
In simple terms, Dijkstra’s algorithm is a method for finding the shortest path between nodes in a graph. A source node is designated, and then the distance between this node and all other nodes is calculated using the algorithm.
As opposed to some other shortest-path algorithms, Dijkstra’s only works with non-negative weighted graphs. That is, graphs where each edge’s weight represents some quantity, such as cost or distance, and none of the weights are negative.
The Algorithm
Before we dive into how to implement Dijkstra’s algorithm using code, let’s run through the steps involved:
- First, we initialize an empty priority queue; let’s call this “Q”. This is the most efficient way to use the algorithm, as it ensures that we always select the next closest vertex to the source node.
- Since we don’t yet know the distances from each vertex to the source node, we initialize all the vertex distances to infinity.
- The distance of the source vertex, or source node, is also set to 0 as this is our starting point.
- The source node is then added to the priority queue with a priority of 0.
- The algorithm then repeatedly removes the vertex with the smallest distance from the queue. This vertex is called “u,” and its neighboring vertices are designated “v.” The distance from the source node to each v through u is calculated, and if it is smaller than the current distance recorded for v, that distance is updated.
- Whenever a neighbor v’s distance is updated in this way, v is inserted into the priority queue with its priority set to its new distance from the source vertex. This ensures that vertices with smaller distances are processed first, which keeps the algorithm efficient by always considering the closest unvisited vertex next. The sketch below shows these steps in code.
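Before looking at a full implementation, here is a minimal sketch of these steps in Python, assuming the graph is given as a dictionary mapping each vertex to its neighbors and edge weights. The `dijkstra_sketch` name and the small example graph are purely illustrative, not part of the implementation discussed later:

```python
import heapq

def dijkstra_sketch(graph, source):
    # graph: dict mapping each vertex to a dict of {neighbor: edge weight}.
    # Initialize every distance to infinity, except the source, which is 0.
    distances = {vertex: float("inf") for vertex in graph}
    distances[source] = 0

    # Priority queue Q, seeded with the source at priority 0.
    queue = [(0, source)]
    while queue:
        # Remove the vertex u with the smallest known distance.
        dist_u, u = heapq.heappop(queue)
        if dist_u > distances[u]:
            continue  # Stale entry; a shorter path to u was already found.
        # Relax each neighbor v: update its distance if going through u is shorter.
        for v, weight in graph[u].items():
            new_dist = dist_u + weight
            if new_dist < distances[v]:
                distances[v] = new_dist
                heapq.heappush(queue, (new_dist, v))
    return distances

# A small, made-up graph with non-negative edge weights.
example = {
    "A": {"B": 2, "C": 2},
    "B": {"A": 2, "C": 2},
    "C": {"A": 2, "B": 2, "D": 2},
    "D": {"C": 2},
}
print(dijkstra_sketch(example, "A"))  # {'A': 0, 'B': 2, 'C': 2, 'D': 4}
```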
Functioning of Dijkstra’s Algorithm

Now that we’ve covered the basics behind Dijkstra’s algorithm, let’s take a look at how it works with an example. Consider a graph with vertices A through F, which are connected to each other through various paths. These connections are also known as edges. If we choose A as our starting vertex, the path values for all the other nodes are set to infinity, as previously discussed.
We then visit each vertex and update the path lengths of its neighbors. If the calculated distance to an adjacent node through the current vertex is less than that node’s current distance, the distance is updated.
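To make this update step (often called “relaxation”) concrete, here is a small illustration of the very first round from A, using the edge weights that appear in the adjacency matrix of the implementation further below (A–B and A–C both have weight 2). The variable names here are just placeholders:

```python
INF = float("inf")

# Distances from A after initialization: only A itself is known.
dist = {"A": 0, "B": INF, "C": INF, "D": INF, "E": INF, "F": INF}

# Edges leaving A, taken from the example graph (A-B and A-C both weigh 2).
neighbors_of_a = {"B": 2, "C": 2}

for v, weight in neighbors_of_a.items():
    if dist["A"] + weight < dist[v]:
        dist[v] = dist["A"] + weight  # Relaxation: a shorter path to v was found.

print(dist)  # {'A': 0, 'B': 2, 'C': 2, 'D': inf, 'E': inf, 'F': inf}
```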
By setting the priority of each vertex equal to its current distance from the source, we always pick the closest unvisited node next and avoid processing finished nodes again. This process is repeated until every node has been visited and its distance from the source node has been calculated.
Implementation of Dijkstra’s Algorithm Using a Heap
It’s time to implement Dijkstra’s algorithm in Python and see how it works. We’ll use a simple implementation of a priority queue, known as a heap. The code looks as follows:
```python
import heapq
import sys

def dijkstra_heap(adj_matrix, source):
    num_vertices = len(adj_matrix)

    # Start with every distance set to "infinity", except the source itself.
    distances = [sys.maxsize] * num_vertices
    distances[source] = 0

    # Track which vertices have already been processed.
    visited = [False] * num_vertices

    # The heap holds (distance, vertex) pairs, starting with the source at 0.
    heap = [(0, source)]

    while heap:
        # Pop the unprocessed vertex closest to the source.
        current_distance, current_vertex = heapq.heappop(heap)
        visited[current_vertex] = True

        # Relax every edge leaving the current vertex.
        for neighbor_vertex in range(num_vertices):
            weight = adj_matrix[current_vertex][neighbor_vertex]
            if not visited[neighbor_vertex] and weight != 0:
                distance = current_distance + weight
                if distance < distances[neighbor_vertex]:
                    distances[neighbor_vertex] = distance
                    heapq.heappush(heap, (distance, neighbor_vertex))

    return distances

# The example graph as an adjacency matrix: cell [i][j] holds the weight
# of the edge between vertices i and j, and 0 means "no edge".
adj_matrix = [
    [0, 2, 2, 0, 0, 0],
    [2, 0, 2, 0, 0, 0],
    [2, 2, 0, 2, 2, 6],
    [0, 0, 2, 0, 2, 2],
    [0, 0, 2, 2, 0, 2],
    [0, 0, 6, 2, 2, 0]
]

distances = dijkstra_heap(adj_matrix, 0)
print(distances)
```
Explanation of the Code
Let’s break this code down step-by-step so we can understand what’s going on.
Importing Modules
Firstly, we’re importing the “heapq” and “sys” modules. The first gives us the heap data structure we’ll use as a priority queue, while the second gives us access to the largest available integer value (“sys.maxsize”), which we use as a stand-in for infinity when initializing the vertex distances.
Defining Dijkstra’s Algorithm
Next, we define “dijkstra_heap” as a function with two arguments — “adj_matrix” and “source”. These are a representation of the graph as an adjacency matrix and the index of the source node, respectively.
Visiting the Nodes
In this block, we’re calculating the number of vertices. Then, we’re initializing the distances from the source vertex to each destination vertex to infinity, as mentioned before. The source node’s distance is also set to zero.
A list called “visited” is then created and initialized to “False” for every vertex, meaning all nodes are considered unvisited until the algorithm has finalized their distance from the source. A priority queue called “heap” is also created, seeded with a single (distance, vertex) tuple holding the source vertex at distance 0.
Updating the Shortest Path Lengths
Next, we have a “while” loop that runs until the queue is empty. The first line inside it pops the vertex with the shortest known distance from the source off the heap, and the second line marks that vertex as visited, since its shortest distance is now final.
We follow this with a “for” loop, which iterates over every potential neighbor of the current vertex. For each one, the edge weight is read from the adjacency matrix, and we check that the neighbor hasn’t been visited yet and that the weight is non-zero (a zero entry means there is no edge). The distance to the neighbor is then calculated as the current vertex’s distance from the source plus the edge weight.
We have another “if” statement, which checks if the distance calculated is shorter than the current shortest distance in our “distances” list. If so, the next line updates this value. Finally, the neighbor vertex, as well as its distance from the source, is added to the heap.
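One detail worth noting: because a vertex can be pushed onto the heap more than once, whenever its distance improves, the heap may end up holding stale entries. The “visited” and distance checks above handle this, but a common variant, shown here as an optional alternative rather than as the original code, drops the “visited” list and instead skips stale entries the moment they are popped:

```python
import heapq
import sys

def dijkstra_heap_lazy(adj_matrix, source):
    num_vertices = len(adj_matrix)
    distances = [sys.maxsize] * num_vertices
    distances[source] = 0
    heap = [(0, source)]
    while heap:
        current_distance, current_vertex = heapq.heappop(heap)
        # Skip stale heap entries: this vertex was re-pushed later
        # with a shorter distance, so this older entry can be ignored.
        if current_distance > distances[current_vertex]:
            continue
        for neighbor_vertex in range(num_vertices):
            weight = adj_matrix[current_vertex][neighbor_vertex]
            if weight != 0:
                distance = current_distance + weight
                if distance < distances[neighbor_vertex]:
                    distances[neighbor_vertex] = distance
                    heapq.heappush(heap, (distance, neighbor_vertex))
    return distances
```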
Output of Results
After this process finishes, the function returns the distances from the source node to all the other nodes in the graph.
The penultimate code block simply represents the graph we’re looking at as an adjacency matrix, where the value in cell (i, j) is the weight of the edge connecting vertices i and j, and a value of 0 means there is no edge between them.
The final code simply tells the system to compute the shortest path distances, and then print these to the console.
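Since the source is just an index into the matrix, the same function can be reused for any starting vertex. For example, continuing from the code above, passing index 2 computes the distances from vertex C instead:

```python
# Reusing dijkstra_heap and adj_matrix from above: distances from vertex C (index 2).
distances_from_c = dijkstra_heap(adj_matrix, 2)
print(distances_from_c)  # Expected: [2, 2, 0, 2, 2, 4]
```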
Implementation of Code

Running this code, we receive an output of [0, 2, 2, 4, 4, 6], representing the shortest path distances from source node A to each of the nodes A, B, C, D, E, and F (the first value being A’s distance to itself).
Best and Worst Use Cases of Dijkstra’s Algorithm
Generally speaking, Dijkstra’s algorithm is seen as efficient, although some algorithms may perform better in specialized cases. With that said, let’s look at the time and space complexity of Dijkstra’s algorithm in the best, average and worst case.
Time Complexity of Dijkstra’s Algorithm
| Case | Time Complexity |
|---|---|
| Best | O(E log V), O(E + V log V), or O(1) |
| Average | O(E + V log V) |
| Worst | O(V² log V) or O((E + V) log V) |
Usually, we have only one complexity for each case. But, with Dijkstra’s algorithm, the situation isn’t as clear-cut. This is because you can implement the algorithm in different ways, mostly through the use of a priority queue.
When using a priority queue, you could use either a binary heap or a Fibonacci heap. With a binary heap, the worst-case time complexity is O((E + V) log V), where E is the number of edges and V is the number of vertices; a Fibonacci heap improves this to O(E + V log V), which generally makes it the more efficient choice on large graphs.
If you’re not using a priority queue at all, a simple array-based implementation runs in O(V²). With a binary heap on a dense graph, where E approaches V², the worst case works out to the less efficient O(V² log V).
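As a rough, back-of-the-envelope illustration using the six-vertex, nine-edge example graph from earlier (treating the big-O expressions as literal operation counts, which they are not in any precise sense):

```python
import math

V, E = 6, 9  # vertices and edges in the example graph from earlier
print((E + V) * math.log2(V))  # heap-based bound: roughly 38.8 "operations"
print(V ** 2)                  # simple array-based bound: 36 "operations"
```

In other words, for tiny graphs the simple approach is perfectly competitive; the heap-based bound only pays off as graphs grow larger and sparser.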
Regarding the average case, the time complexity is a bit more favorable. The average case refers to when a priority queue is used to calculate the minimum distances to each vertex.
The best case, strictly speaking, is an unusual one that you probably wouldn’t bother running the algorithm for: the source vertex is the same as the destination vertex, so the iteration terminates after a single loop.
The complexity formula is the same as in the average case, but with E and V effectively equal to 1 it simplifies to O(1). If you’re using a heap to implement your algorithm, the best-case complexity is usually quoted as O(E log V), because there can be at most O(E) entries in your queue and each heap operation costs O(log V).
Space Complexity of Dijkstra’s Algorithm
| Case | Space Complexity |
|---|---|
| Best | O(V) |
| Average | O(V + E) |
| Worst | O(V²) |
As with the time complexity, the space complexity of the algorithm depends on your implementation. If you’re using an adjacency list to represent your graph and a Fibonacci heap or regular heap, the complexity is equal to O(V+E). This is because the vertex distance from the source, as well as the edges between the vertices, need to be stored in memory.
However, if you use an adjacency matrix instead, the complexity grows to O(V²). This is because you need to store the vertex distances as well as a V × V matrix to represent the edges.
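To make the difference concrete, here is the same six-vertex example graph represented as an adjacency list (a dictionary of neighbor lists). It stores one key per vertex plus two entries per undirected edge, i.e. O(V + E) values, versus the 36 cells of the 6 × 6 matrix:

```python
# Adjacency-list representation of the example graph: one key per vertex,
# and one (neighbor, weight) pair per edge direction, O(V + E) storage overall.
adj_list = {
    "A": [("B", 2), ("C", 2)],
    "B": [("A", 2), ("C", 2)],
    "C": [("A", 2), ("B", 2), ("D", 2), ("E", 2), ("F", 6)],
    "D": [("C", 2), ("E", 2), ("F", 2)],
    "E": [("C", 2), ("D", 2), ("F", 2)],
    "F": [("C", 6), ("D", 2), ("E", 2)],
}

# 6 vertices and 2 * 9 = 18 stored edge entries, versus 36 matrix cells.
print(len(adj_list), sum(len(v) for v in adj_list.values()))  # 6 18
```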
Wrapping Up
To summarize, Dijkstra’s algorithm is one of many algorithms used for calculating the shortest path lengths in a graph and is very useful. While there are many ways to implement the algorithm, using a priority queue of some description is usually your best bet.
Of these methods, using a Fibonacci heap tends to be the most efficient, particularly for large graphs. For smaller graphs, however, a simpler priority queue, such as a binary heap, may be more than sufficient.
Up Next
- Selection Sort Algorithm Explained, With Photos
- Machine Learning vs. Deep Learning: Full Explanation and Differences
- What Is Dynamic Programming, With Examples