Dijkstra's Algorithm
Do it by hand
Do the following two things.
1. Find a path from the given start vertex to the given target vertex.
2. Find a shortest-paths tree from the given start vertex (i.e., find the shortest path from that vertex to every single vertex in the graph).
Then, try to come up with an algorithm that does this for you.
(The solutions to 1 and 2 are at the end of this section.)
Observations
Note that the shortest path (in a graph whose edges have weights) can have many, many edges. What we care about minimizing is not the number of edges, but the sum of the weights of the edges on the selected path.
Secondly, note that the shortest-paths tree from a source can be created in the following way:
- For every vertex (other than the source) in the graph, find the shortest path from the source to that vertex.
- "Combine"/"Union" all the edges that you found above. Tada!
Thirdly, note that the "Shortest Path Tree" will always be a tree. Why? Well, let's think about our original solution, where we maintained an array. For every node, there was exactly one "parent" in the array. (Why does this imply that the "Shortest Path Tree" will be a tree? Hint: A tree has N - 1 edges, where N is the number of nodes in the tree.)
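To see how such a parent array pins down a unique path, here is a minimal sketch (the function name path_to, the vertex names, and the edgeTo contents are all made up for illustration): walking parent pointers backwards from any vertex traces out exactly one path back to the source.

    def path_to(edgeTo, source, target):
        # Walk parent pointers from target back to source, then reverse.
        path = [target]
        while path[-1] != source:
            path.append(edgeTo[path[-1]])
        return list(reversed(path))

    # Hypothetical parent map for a tiny graph rooted at 'A':
    edgeTo = {'B': 'A', 'C': 'A', 'D': 'B'}
    print(path_to(edgeTo, 'A', 'D'))   # prints ['A', 'B', 'D']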
Dijkstra's Algorithm (/ˈdaɪkstrə/)
Dijkstra's algorithm takes in an input vertex (the source), and outputs the shortest-paths tree from that source. How does it work?
- Create a priority queue.
- Add the source to the priority queue with priority 0. Add all other vertices to the priority queue with priority infinity.
- While the priority queue is not empty: pop a vertex out of the priority queue, and relax all of the edges going out from the vertex.
What does it mean to relax?
Suppose the vertex we just popped from the priority queue was p. We'll look at all of p's edges. Say we're looking at the edge (p, q) (the edge that goes from p to q). We're going to try and relax this edge.
What that means is: look at your current best known distance to q from the source, distTo[q]. Now, look at the distance you'd get by going to q through p instead, namely distTo[p] + weight(p, q) (let's call it potentialDist).
Is potentialDist better, i.e., smaller, than distTo[q]? In that case, set distTo[q] = potentialDist, and update edgeTo[q] to be p.
Important note: we never relax edges that point to already visited vertices.
This whole process of calculating the potential distance, checking if it's better, and potentially updating is called relaxing.
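For example (with made-up numbers): suppose distTo[p] = 3, the edge (p, q) has weight 2, and distTo[q] is currently 7. Then the potential distance through p is 3 + 2 = 5, which beats 7, so we set distTo[q] = 5, set edgeTo[q] = p, and lower q's priority in the priority queue to 5.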
An alternate definition is captured by the following image.
Pseudocode
def dijkstras(source):
    PQ.add(source, 0)
    for all other vertices v: PQ.add(v, infinity)
    while PQ is not empty:
        p = PQ.removeSmallest()
        relax(all edges from p)

def relax(edge (p, q)):
    if q is visited (i.e., q is not in PQ):
        return
    if distTo[p] + weight(p, q) < distTo[q]:
        distTo[q] = distTo[p] + weight(p, q)
        edgeTo[q] = p
        PQ.changePriority(q, distTo[q])
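For the curious, here is one possible translation of that pseudocode into runnable Python. It is only a sketch under a few assumptions: the graph is a dict mapping each vertex to a list of (neighbor, weight) pairs, and since Python's heapq has no changePriority operation, we push duplicate entries and skip the stale ones when they surface.

    import heapq

    def dijkstras(graph, source):
        # graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
        distTo = {v: float('inf') for v in graph}
        distTo[source] = 0
        edgeTo = {}
        visited = set()
        pq = [(0, source)]                      # (priority, vertex) pairs
        while pq:
            d, p = heapq.heappop(pq)
            if p in visited:
                continue                        # stale duplicate entry; skip it
            visited.add(p)
            for q, w in graph[p]:
                if q in visited:
                    continue                    # never relax edges into visited vertices
                if distTo[p] + w < distTo[q]:
                    distTo[q] = distTo[p] + w
                    edgeTo[q] = p
                    heapq.heappush(pq, (distTo[q], q))   # stands in for changePriority
        return distTo, edgeTo

As in the pseudocode, edgeTo records the parent of each vertex in the shortest-paths tree.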
Guarantees
As long as the edges are all non-negative, Dijkstra's is guaranteed to be optimal.
Proofs and Intuitions
Assume all edges are non-negative.
- At start, distTo[source] = 0. This is optimal.
- After relaxing all edges from the source, let v1 be the vertex with the minimum distance (i.e., the one that's closest to the source). Claim: distTo[v1] is optimal, i.e., whatever the value of distTo[v1] is at this point is the shortest distance from the source to v1. Why?
- Let's try to see why this MUST be the case.
- Suppose that it isn't the case. Then that means that there is some other path from the source to v1 which is shorter than the direct edge (source, v1). Ok, so let's consider this hypothetical cool shorter path... it would have to leave the source along some other edge, say (source, w), and eventually reach v1. But the edge (source, w) alone is already at least as big as (source, v1). (Note that this is true because v1 is the vertex that is closest to the source, from above.) And since all edges are non-negative, the rest of the path can only add to that. So how can such a path exist which is actually shorter? It can't!
- Now, the next vertex to be popped will be v1. (Why? Note that it currently has the lowest priority in the PQ!)
- So now, we can make this same argument for v1 and all the relaxation it does. (This is called "proof by induction". It's kind of like recursion for proofs.) And that's it; we're done.
Negative Edges?
Things can go pretty badly when negative edges come into the picture. Consider the following image.
Suppose you're at the vertex currently being processed. Now you're going to try to relax all your edges. You have only one outgoing edge, and it has a negative weight. Ah, but note: the vertex that edge points to is already visited (it's marked with white). So... we don't relax it. (Recall the pseudocode for the relax method.)
Now we go home thinking that the shortest distance to that vertex is whatever distTo already says (marked in pink). But really, we should have taken the path through the current vertex, because the negative edge would have made the total distance smaller. Oops.
Dijkstra's algorithm is not guaranteed to be correct for negative edges. It might work... but it isn't guaranteed to work.
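To make this concrete, here is a tiny made-up graph where exactly this happens (it assumes the runnable dijkstras sketch from the pseudocode section is in scope; the vertex names A, B, C and the weights are invented for illustration):

    # Hypothetical graph: A -> B has weight 1, A -> C has weight 5, C -> B has weight -10.
    graph = {'A': [('B', 1), ('C', 5)],
             'C': [('B', -10)],
             'B': []}

    distTo, edgeTo = dijkstras(graph, 'A')
    print(distTo['B'])   # prints 1: B was popped (and marked visited) before C,
                         # so the edge C -> B was never relaxed.
                         # The true shortest distance is 5 + (-10) = -5.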
Try this out: suppose that your graph has negative edges, but all the negative edges only go out of the source vertex that you were passed in. Does Dijkstra's work? Why / Why not?
A noteworthy invariant
Observe that once a vertex is popped off the priority queue, it is never re-added, and its distance is never re-updated. So, in other words, once a vertex is popped from the priority queue, we know the true shortest distance to that vertex from the source (again, as long as all edges are non-negative).
One nice consequence of this fact is "short-circuiting". Suppose that I didn't care about the shortest-paths tree, but just wanted to find the shortest path from some source to some other target. Suppose that you wanted to take, like, the cities of the world on a graph, and find the shortest path from Berkeley to Oakland. Running dijkstra(Berkeley) will mean that you can't actually stop this powerful beast of an algorithm... you have to let it run... till it finds the shortest path to LA, and Houston, and New York City, and everywhere possible!
Well. Once Oakland is popped off the priority queue in the algorithm, we can just stop. We can just return the distance and the path we have at that point, and it will be correct. So sometimes dijkstra takes in not only a source, but also a target. This is for the purposes of short-circuiting.
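Here is a sketch of what that might look like, under the same assumptions as the earlier runnable sketch (dict-of-adjacency-lists graph, heapq with duplicate entries); the name dijkstras_to is made up. The only change is that we return as soon as the target is popped, since its distance is final at that moment.

    import heapq

    def dijkstras_to(graph, source, target):
        # Short-circuiting variant: stop as soon as `target` is popped.
        distTo = {v: float('inf') for v in graph}
        distTo[source] = 0
        edgeTo = {}
        visited = set()
        pq = [(0, source)]
        while pq:
            d, p = heapq.heappop(pq)
            if p in visited:
                continue
            if p == target:
                return d, edgeTo            # d is already the true shortest distance to target
            visited.add(p)
            for q, w in graph[p]:
                if q in visited:
                    continue
                if distTo[p] + w < distTo[q]:
                    distTo[q] = distTo[p] + w
                    edgeTo[q] = p
                    heapq.heappush(pq, (distTo[q], q))
        return float('inf'), edgeTo         # target unreachable from source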
The promised solutions