Introduction to Algorithms (2nd edition), Thomas Cormen, Charles Leiserson, Ronald Rivest
a. Argue that line 10 is executed exactly once for each pair {u, v} ∈ P.

b. Argue that at the time of the call LCA(u), the number of sets in the disjoint-set data structure is equal to the depth of u in T.

c. Prove that LCA correctly prints the least common ancestor of u and v for each pair {u, v} ∈ P.

d. Analyze the running time of LCA, assuming that we use the implementation of the disjoint-set data structure in Section 21.3.

Chapter notes

Many of the important results for disjoint-set data structures are due at least in part to R. E. Tarjan. Using aggregate analysis, Tarjan [290, 292] gave the first tight upper bound in terms of the very slowly growing inverse α(m, n) of Ackermann's function. (The function A_k(j) given in Section 21.4 is similar to Ackermann's function, and the function α(n) is similar to the inverse. Both α(n) and α(m, n) are at most 4 for all conceivable values of m and n.) An O(m lg* n) upper bound was proven earlier by Hopcroft and Ullman [5, 155]. The treatment in Section 21.4 is adapted from a later analysis by Tarjan [294], which is in turn based on an analysis by Kozen [193].
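As a concrete illustration of the off-line LCA procedure discussed in the problem above, here is a hedged Python sketch of Tarjan's off-line least-common-ancestors algorithm. The input format (a children dict and a root), the function names, and the query bookkeeping are all our own choices, and the recursive structure mirrors the book's pseudocode only loosely; in particular, no claim is made about which step corresponds to its line 10.

```python
# Sketch of Tarjan's off-line least-common-ancestors algorithm,
# using a simple union-find with path compression. The tree is a
# dict mapping each vertex to a list of its children; the query
# pairs P are a list of vertex pairs. (Illustrative input format,
# not the book's; plain recursion, so very deep trees would need a
# larger recursion limit.)

def offline_lca(children, root, pairs):
    parent = {}     # union-find parent pointers
    ancestor = {}   # set representative -> current candidate ancestor
    color = {}      # a vertex is 'BLACK' once fully processed
    answers = {}    # frozenset({u, v}) -> least common ancestor

    def find_set(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    # queries[u] lists every v that is paired with u in P
    queries = {}
    for u, v in pairs:
        queries.setdefault(u, []).append(v)
        queries.setdefault(v, []).append(u)

    def lca(u):
        parent[u] = u       # MAKE-SET(u)
        ancestor[u] = u
        for child in children.get(u, []):
            lca(child)
            parent[find_set(child)] = u    # UNION into u's set
            ancestor[find_set(u)] = u      # u is the set's ancestor
        color[u] = 'BLACK'
        for v in queries.get(u, []):
            if color.get(v) == 'BLACK':    # both endpoints visited
                answers[frozenset((u, v))] = ancestor[find_set(v)]

    lca(root)
    return answers
```

Each pair is answered exactly once, when the later of its two endpoints turns black, and the number of live sets during the call lca(u) equals the depth of u, matching parts (a) and (b) of the exercise.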
Harfst and Reingold [139] give a potential-based version of Tarjan's earlier bound. Tarjan and van Leeuwen [295] discuss variants on the path-compression heuristic, including "one-pass methods," which sometimes offer better constant factors in their performance than do two-pass methods. As with Tarjan's earlier analyses of the basic path-compression heuristic, the analyses by Tarjan and van Leeuwen are aggregate. Harfst and Reingold [139] later showed how to make a small change to the potential function to adapt their path-compression analysis to these one-pass variants. Gabow and Tarjan [103] show that in certain applications, the disjoint-set operations can be made to run in O(m) time.

Tarjan [291] showed that a lower bound of Ω(m α(m, n)) time is required for operations on any disjoint-set data structure satisfying certain technical conditions. This lower bound was later generalized by Fredman and Saks [97], who showed that in the worst case, Ω(m α(m, n)) (lg n)-bit words of memory must be accessed.

Part VI: Graph Algorithms

Chapter List

Chapter 22: Elementary Graph Algorithms
Chapter 23: Minimum Spanning Trees
Chapter 24: Single-Source Shortest Paths
Chapter 25: All-Pairs Shortest Paths
Chapter 26: Maximum Flow

Introduction

Graphs are a pervasive data structure in computer science, and algorithms for working with them are fundamental to the field.
There are hundreds of interesting computational problems defined in terms of graphs. In this part, we touch on a few of the more significant ones.

Chapter 22 shows how we can represent a graph on a computer and then discusses algorithms based on searching a graph using either breadth-first search or depth-first search.
Two applications of depth-first search are given: topologically sorting a directed acyclic graph and decomposing a directed graph into its strongly connected components.

Chapter 23 describes how to compute a minimum-weight spanning tree of a graph. Such a tree is defined as the least-weight way of connecting all of the vertices together when each edge has an associated weight. The algorithms for computing minimum spanning trees are good examples of greedy algorithms (see Chapter 16).

Chapters 24 and 25 consider the problem of computing shortest paths between vertices when each edge has an associated length or "weight." Chapter 24 considers the computation of shortest paths from a given source vertex to all other vertices, and Chapter 25 considers the computation of shortest paths between every pair of vertices.

Finally, Chapter 26 shows how to compute a maximum flow of material in a network (directed graph) having a specified source of material, a specified sink, and specified capacities for the amount of material that can traverse each directed edge.
This general problem arises in many forms, and a good algorithm for computing maximum flows can be used to solve a variety of related problems efficiently.

In describing the running time of a graph algorithm on a given graph G = (V, E), we usually measure the size of the input in terms of the number of vertices |V| and the number of edges |E| of the graph.
That is, there are two relevant parameters describing the size of the input, not just one. We adopt a common notational convention for these parameters. Inside asymptotic notation (such as O-notation or Θ-notation), and only inside such notation, the symbol V denotes |V| and the symbol E denotes |E|. For example, we might say, "the algorithm runs in time O(V E)," meaning that the algorithm runs in time O(|V| |E|). This convention makes the running-time formulas easier to read, without risk of ambiguity.

Another convention we adopt appears in pseudocode.
We denote the vertex set of a graph G by V[G] and its edge set by E[G]. That is, the pseudocode views vertex and edge sets as attributes of a graph.

Chapter 22: Elementary Graph Algorithms

This chapter presents methods for representing a graph and for searching a graph. Searching a graph means systematically following the edges of the graph so as to visit the vertices of the graph. A graph-searching algorithm can discover much about the structure of a graph. Many algorithms begin by searching their input graph to obtain this structural information. Other graph algorithms are organized as simple elaborations of basic graph-searching algorithms. Techniques for searching a graph are at the heart of the field of graph algorithms.

Section 22.1 discusses the two most common computational representations of graphs: as adjacency lists and as adjacency matrices.
Section 22.2 presents a simple graph-searching algorithm called breadth-first search and shows how to create a breadth-first tree. Section 22.3 presents depth-first search and proves some standard results about the order in which depth-first search visits vertices. Section 22.4 provides our first real application of depth-first search: topologically sorting a directed acyclic graph. A second application of depth-first search, finding the strongly connected components of a directed graph, is given in Section 22.5.

22.1 Representations of graphs

There are two standard ways to represent a graph G = (V, E): as a collection of adjacency lists or as an adjacency matrix. Either way is applicable to both directed and undirected graphs. The adjacency-list representation is usually preferred, because it provides a compact way to represent sparse graphs (those for which |E| is much less than |V|²). Most of the graph algorithms presented in this book assume that an input graph is represented in adjacency-list form.
An adjacency-matrix representation may be preferred, however, when the graph is dense (|E| is close to |V|²) or when we need to be able to tell quickly if there is an edge connecting two given vertices. For example, two of the all-pairs shortest-paths algorithms presented in Chapter 25 assume that their input graphs are represented by adjacency matrices.

The adjacency-list representation of a graph G = (V, E) consists of an array Adj of |V| lists, one for each vertex in V. For each u ∈ V, the adjacency list Adj[u] contains all the vertices v such that there is an edge (u, v) ∈ E.
That is, Adj[u] consists of all the vertices adjacent to u in G. (Alternatively, it may contain pointers to these vertices.) The vertices in each adjacency list are typically stored in an arbitrary order. Figure 22.1(b) is an adjacency-list representation of the undirected graph in Figure 22.1(a). Similarly, Figure 22.2(b) is an adjacency-list representation of the directed graph in Figure 22.2(a).

Figure 22.1: Two representations of an undirected graph. (a) An undirected graph G having five vertices and seven edges.
(b) An adjacency-list representation of G. (c) The adjacency-matrix representation of G.

Figure 22.2: Two representations of a directed graph. (a) A directed graph G having six vertices and eight edges. (b) An adjacency-list representation of G. (c) The adjacency-matrix representation of G.

If G is a directed graph, the sum of the lengths of all the adjacency lists is |E|, since an edge of the form (u, v) is represented by having v appear in Adj[u]. If G is an undirected graph, the sum of the lengths of all the adjacency lists is 2|E|, since if (u, v) is an undirected edge, then u appears in v's adjacency list and vice versa.
For both directed and undirected graphs, the adjacency-list representation has the desirable property that the amount of memory it requires is Θ(V + E).

Adjacency lists can readily be adapted to represent weighted graphs, that is, graphs for which each edge has an associated weight, typically given by a weight function w : E → R. For example, let G = (V, E) be a weighted graph with weight function w. The weight w(u, v) of the edge (u, v) ∈ E is simply stored with vertex v in u's adjacency list.
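The two representations discussed above can be sketched in code as follows. This is a minimal illustration using a small example graph of our own choosing (not the graphs of Figures 22.1 and 22.2); following the weighted-graph convention above, each weight w(u, v) is stored alongside v in u's adjacency list.

```python
# A minimal sketch of the adjacency-list and adjacency-matrix
# representations for a small illustrative undirected weighted graph.
# (Example graph and function names are our own, not the book's.)

def build_representations(num_vertices, weighted_edges):
    """Return (Adj, M): adjacency lists of (v, w(u, v)) pairs, and an
    adjacency matrix storing weights, with 0 meaning "no edge"."""
    adj = [[] for _ in range(num_vertices)]
    matrix = [[0] * num_vertices for _ in range(num_vertices)]
    for u, v, w in weighted_edges:
        # Undirected graph: each edge appears in both endpoints'
        # lists, so the lists have total length 2|E|.
        adj[u].append((v, w))
        adj[v].append((u, w))
        matrix[u][v] = w
        matrix[v][u] = w
    return adj, matrix

edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (2, 3, 7)]  # 4 vertices, 4 edges
adj, matrix = build_representations(4, edges)
```

The memory trade-off stated above is visible here: the lists occupy Θ(V + E) space, while the matrix always occupies Θ(V²) regardless of how sparse the graph is, but the matrix answers "is (u, v) an edge?" in O(1) time.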