Kosaraju’s algorithm is amazingly simple. It uses a very clever trick based on the fact that
if you reverse all of the edges in a graph, the resulting graph has the same strongly connected components as the original. So using that, we can get the SCCs by doing a forward traversal to find an ordering of vertices, then doing a traversal of the reverse of the graph in the order generated by the first traversal.
That may sound a bit mysterious, but it’s really very simple. Take the graph G, and do a recursive depth-first traversal of it, maintaining an initially empty stack of vertices. As the recursion started at a vertex V finishes, push V onto the stack. (If the traversal doesn’t reach every vertex, restart it from some unvisited vertex until it does.) At the end of the traversal, you’ll have all of the vertices of G in the stack, and the reverse traversal will process vertices starting from the top of that stack.
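To make that concrete, here’s a minimal sketch of the first pass in Python, assuming the graph is a dict mapping each vertex to a list of its successors (that representation, and the name finish_order, are just choices for this sketch, not anything from the original):

    def finish_order(graph):
        """First pass: depth-first traversal, pushing each vertex onto a
        stack as the recursion started at that vertex finishes."""
        visited = set()
        stack = []

        def dfs(vertex):
            visited.add(vertex)
            for successor in graph.get(vertex, []):
                if successor not in visited:
                    dfs(successor)
            stack.append(vertex)  # the recursion at this vertex just finished

        # Restart from any vertex the traversal hasn't reached yet, so
        # disconnected parts of the graph end up on the stack too.
        for vertex in graph:
            if vertex not in visited:
                dfs(vertex)
        return stack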
So you reverse all of the edges in the graph, creating the reverse graph, G’. Start with the vertex on top of the stack, and do a traversal from that vertex in G’. All of the nodes reachable from it that haven’t already been assigned to an SCC form one strongly connected component. Remove everything in that SCC from the stack, and then repeat the process with the new top of the stack. When the stack is empty, you’ll have accumulated all of the SCCs.
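Building on the finish_order sketch above, the reversal and the second pass might look something like this (again just a sketch, under the same dict-of-successor-lists assumption):

    def reverse_graph(graph):
        """Build G': the same vertices, with every edge flipped."""
        reversed_graph = {vertex: [] for vertex in graph}
        for vertex, successors in graph.items():
            for successor in successors:
                reversed_graph.setdefault(successor, []).append(vertex)
        return reversed_graph

    def kosaraju_scc(graph):
        """Second pass: traverse G' in stack order, peeling off one
        strongly connected component per traversal."""
        stack = finish_order(graph)        # from the sketch above
        reversed_graph = reverse_graph(graph)
        assigned = set()                   # vertices already placed in an SCC
        components = []
        while stack:
            root = stack.pop()             # vertex on top of the stack
            if root in assigned:
                continue
            # Everything reachable from root in G' that isn't already in
            # an earlier SCC forms root's component.
            component = set()
            worklist = [root]
            while worklist:
                vertex = worklist.pop()
                if vertex in assigned:
                    continue
                assigned.add(vertex)
                component.add(vertex)
                worklist.extend(reversed_graph.get(vertex, []))
            components.append(component)
        return components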
As usual, things are a whole lot clearer with an example. Let’s work through one using the graph to the right.
First, we do a depth-first traversal of the graph, putting vertices on a stack as we finish the recursive step started at that node. We’ll start the traversal with “A”: A, F, E, C, G, H – H is finished, so it goes on the stack. Then G goes on the stack; then C, then E, then F, and we’re back to A. So at this point, the stack is [H, G, C, E, F]. Then we keep going from A: A, B, D. D is done, so it goes on the stack; then B, then A. So the final stack is [H, G, C, E, F, D, B, A], with A on top.
Then, we reverse the graph, and start traversing from whatever is on top of the stack. So first, we start from A in the reversed graph. That gives us A, D, B. So {A, D, B} form one strongly connected
component. Now, the top of the stack that hasn’t been traversed yet is F. So we do F, E; {F, E} is another SCC. The remaining stack is now [H, G, C]; starting from C on top, that traverses straightforwardly as C, H, G. So we end up with {A, B, D}, {E, F}, and {C, G, H} as our SCCs.
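To tie the walkthrough back to the code, here’s one edge set that’s consistent with the traversal order described above (the diagram itself isn’t reproduced here, so treat the exact edges as an assumption). Feeding it to the kosaraju_scc sketch produces the same three components:

    example = {
        "A": ["F", "B"],
        "B": ["D"],
        "C": ["G"],
        "D": ["A"],
        "E": ["F", "C"],
        "F": ["E"],
        "G": ["H"],
        "H": ["C"],
    }

    print(kosaraju_scc(example))
    # [{'A', 'B', 'D'}, {'E', 'F'}, {'C', 'G', 'H'}]
    # (element order inside each set may vary when printed)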
What’s the time complexity? We need to do two traversals, and one graph reversal. If we’re using an adjacency list representation, we can create the reverse graph while we do the first traversal, so it’s really just two depth-first traversals. So it’s two times the complexity of a traversal; in an adjacency list representation, a traversal is O(V+E), where V is the number of vertices and E is the number of edges – so Kosaraju’s SCC algorithm is also O(V+E). (If you use an adjacency matrix, then it’s O(V²).)
Can we do better than that? In terms of order of complexity, no – you can’t do better than the cost of a single traversal of the graph. But you can eliminate the second traversal: do the forward traversal using the stack, and as each vertex finishes, check whether it’s the root of a strongly connected component, and if so, pop all members of that component off the stack. Tarjan’s algorithm works this way, doing a single traversal and considering each edge only once, where Kosaraju’s considers each edge twice (once forward, once reversed). So it’s slightly, but not dramatically, faster. It’s generally preferred just because it avoids the computation of the reverse graph.
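For comparison, here’s a sketch of that single-traversal approach in the same style; it’s a standard textbook rendering of Tarjan’s algorithm, not code from this post. Each vertex goes onto a stack when it’s first visited, and a vertex that turns out to be the root of a component pops that whole component off the stack as its recursive call finishes:

    def tarjan_scc(graph):
        """One-pass SCC computation over a dict of successor lists."""
        index_of = {}   # order in which each vertex was first visited
        lowlink = {}    # smallest visit index reachable from the vertex
        on_stack = set()
        stack = []
        components = []
        counter = [0]

        def visit(vertex):
            index_of[vertex] = lowlink[vertex] = counter[0]
            counter[0] += 1
            stack.append(vertex)
            on_stack.add(vertex)
            for successor in graph.get(vertex, []):
                if successor not in index_of:
                    visit(successor)
                    lowlink[vertex] = min(lowlink[vertex], lowlink[successor])
                elif successor in on_stack:
                    lowlink[vertex] = min(lowlink[vertex], index_of[successor])
            if lowlink[vertex] == index_of[vertex]:
                # vertex is the root of an SCC: pop its members off the stack
                component = set()
                while True:
                    member = stack.pop()
                    on_stack.discard(member)
                    component.add(member)
                    if member == vertex:
                        break
                components.append(component)

        for vertex in graph:
            if vertex not in index_of:
                visit(vertex)
        return components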
In terms of what we were talking about in the last post, this is good news. The computation of the SCCs is quite fast – quite a lot better than NP-complete. So we can, feasibly, use the division into SCCs as the basis of parallelization of graph algorithms.
As usual, all diagrams were drawn using OmniGraffle.