Introduction to Algorithms, 2nd edition. Thomas Cormen, Charles Leiserson, Ronald Rivest. Page 84.
Let us define, for each node x,

    w(x) = 0 if x is red,
           1 if x is black with no red children,
           0 if x is black with one red child,
           2 if x is black with two red children.

Now we redefine the potential of a red-black tree T as

    Φ(T) = Σ w(x), summed over all nodes x in T,

and let T′ be the tree that results from applying any nonterminating case of RB-INSERT-FIXUP or RB-DELETE-FIXUP to T.

f. Show that Φ(T′) ≤ Φ(T) - 1 for all nonterminating cases of RB-INSERT-FIXUP. Argue that the amortized number of structural modifications performed by any call of RB-INSERT-FIXUP is O(1).

g. Show that Φ(T′) ≤ Φ(T) - 1 for all nonterminating cases of RB-DELETE-FIXUP. Argue that the amortized number of structural modifications performed by any call of RB-DELETE-FIXUP is O(1).

h. Complete the proof that in the worst case, any sequence of m RB-INSERT and RB-DELETE operations performs O(m) structural modifications.

[1] In some situations, such as an open-address hash table, we may wish to consider a table to be full if its load factor equals some constant strictly less than 1.
(See Exercise 17.4-1.)

Chapter notes

Aggregate analysis was used by Aho, Hopcroft, and Ullman [5]. Tarjan [293] surveys the accounting and potential methods of amortized analysis and presents several applications. He attributes the accounting method to several authors, including M. R. Brown, R. E. Tarjan, S. Huddleston, and K. Mehlhorn.
He attributes the potential method to D. D. Sleator. The term "amortized" is due to D. D. Sleator and R. E. Tarjan.

Potential functions are also useful for proving lower bounds for certain types of problems. For each configuration of the problem, we define a potential function that maps the configuration to a real number. Then we determine the potential Φinit of the initial configuration, the potential Φfinal of the final configuration, and the maximum change in potential ∆Φmax due to any step. The number of steps must therefore be at least |Φfinal - Φinit| / |∆Φmax|.
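As a concrete illustration of this recipe (a sketch of mine, not an example from the text): to sort an array using only adjacent swaps, take the potential Φ of a configuration to be its number of inversions. A single adjacent swap changes the inversion count by exactly 1, so |∆Φmax| = 1, and any sorting sequence must use at least |Φfinal - Φinit| swaps. The function names below are my own.

```python
def inversions(a):
    """Potential Phi of a configuration: the number of pairs
    (i, j) with i < j and a[i] > a[j]."""
    return sum(1 for i in range(len(a))
                 for j in range(i + 1, len(a)) if a[i] > a[j])

def swap_lower_bound(a):
    """Lower bound on adjacent swaps needed to sort a, via the
    potential argument: |Phi_final - Phi_init| / |dPhi_max|."""
    phi_init = inversions(a)   # potential of the initial configuration
    phi_final = 0              # a sorted array has no inversions
    dphi_max = 1               # one adjacent swap changes Phi by exactly 1
    return abs(phi_final - phi_init) // dphi_max

print(swap_lower_bound([3, 2, 1]))   # 3 inversions, so at least 3 swaps
```

This matches the general pattern exactly: the bound says nothing about how to sort, only that no sequence of steps can do better.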
Examples of the use of potential functions for proving lower bounds in I/O complexity appear in works by Cormen [71], Floyd [91], and Aggarwal and Vitter [4]. Krumme, Cybenko, and Venkataraman [194] applied potential functions to prove lower bounds on gossiping: communicating a unique item from each vertex in a graph to every other vertex.

Part V: Advanced Data Structures

Chapter List

Chapter 18: B-Trees
Chapter 19: Binomial Heaps
Chapter 20: Fibonacci Heaps
Chapter 21: Data Structures for Disjoint Sets

Introduction

This part returns to the examination of data structures that support operations on dynamic sets, but at a more advanced level than Part III.
Two of the chapters, for example, make extensive use of the amortized analysis techniques we saw in Chapter 17.

Chapter 18 presents B-trees, which are balanced search trees specifically designed to be stored on magnetic disks. Because magnetic disks operate much more slowly than random-access memory, we measure the performance of B-trees not only by how much computing time the dynamic-set operations consume but also by how many disk accesses are performed. For each B-tree operation, the number of disk accesses increases with the height of the B-tree, which is kept low by the B-tree operations.

Chapters 19 and 20 give implementations of mergeable heaps, which support the operations INSERT, MINIMUM, EXTRACT-MIN, and UNION.[1] The UNION operation unites, or merges, two heaps.
The data structures in these chapters also support the operations DELETE and DECREASE-KEY.

Binomial heaps, which appear in Chapter 19, support each of these operations in O(lg n) worst-case time, where n is the total number of elements in the input heap (or in the two input heaps together in the case of UNION). When the UNION operation must be supported, binomial heaps are superior to the binary heaps introduced in Chapter 6, because it takes Θ(n) time to unite two binary heaps in the worst case.

Fibonacci heaps, in Chapter 20, improve upon binomial heaps, at least in a theoretical sense. We use amortized time bounds to measure the performance of Fibonacci heaps. The operations INSERT, MINIMUM, and UNION take only O(1) actual and amortized time on Fibonacci heaps, and the operations EXTRACT-MIN and DELETE take O(lg n) amortized time.
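The Θ(n) cost of uniting binary heaps can be made concrete with a short sketch (my own illustration, not code from the book): the natural way to merge two array-based binary min-heaps is to concatenate their arrays and rebuild the heap property bottom-up, and both steps take linear time in the total number of elements.

```python
import heapq

def binary_heap_union(h1, h2):
    """Unite two array-based binary min-heaps in Theta(n) time:
    concatenate the arrays, then rebuild the heap property with
    bottom-up heap construction (heapq.heapify is linear-time)."""
    merged = h1 + h2          # Theta(n) copy
    heapq.heapify(merged)     # Theta(n) bottom-up construction
    return merged

u = binary_heap_union([1, 3, 5], [2, 4, 6])
print(heapq.heappop(u))   # 1, the overall minimum
```

A binomial heap, by contrast, unites two heaps by a procedure analogous to binary addition of their root lists, achieving O(lg n) worst-case time for UNION.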
The most significant advantage of Fibonacci heaps, however, is that DECREASE-KEY takes only O(1) amortized time. The low amortized time of the DECREASE-KEY operation is why Fibonacci heaps are key components of some of the asymptotically fastest algorithms to date for graph problems.

Finally, Chapter 21 presents data structures for disjoint sets. We have a universe of n elements that are grouped into dynamic sets. Initially, each element belongs to its own singleton set. The operation UNION unites two sets, and the query FIND-SET identifies the set that a given element is in at the moment. By representing each set by a simple rooted tree, we obtain surprisingly fast operations: a sequence of m operations runs in O(m α(n)) time, where α(n) is an incredibly slowly growing function; α(n) is at most 4 in any conceivable application.
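The rooted-tree representation with its two standard heuristics, union by rank and path compression, can be sketched as follows. This is a conventional implementation with names of my choosing, not code from the book; Chapter 21 develops the structure and its analysis in full.

```python
class DisjointSets:
    """Disjoint-set forest with union by rank and path compression.
    A sequence of m operations runs in O(m * alpha(n)) time."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # upper bound on tree height

    def find_set(self, x):
        # Path compression: point every node on the find path at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find_set(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        # Union by rank: attach the shorter tree under the taller one.
        rx, ry = self.find_set(x), self.find_set(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSets(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find_set(2) == ds.find_set(0))   # True: 0, 1, 2 are one set
print(ds.find_set(3) == ds.find_set(0))   # False: 3 is still a singleton
```

Either heuristic alone already helps; it is their combination that yields the O(m α(n)) bound the text states.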
The amortized analysis that proves this time bound is as complex as the data structure is simple.

The topics covered in this part are by no means the only examples of "advanced" data structures. Other advanced data structures include the following:

• Dynamic trees, introduced by Sleator and Tarjan [281] and discussed by Tarjan [292], maintain a forest of disjoint rooted trees. Each edge in each tree has a real-valued cost. Dynamic trees support queries to find parents, roots, edge costs, and the minimum edge cost on a path from a node up to a root.
Trees may be manipulated by cutting edges, updating all edge costs on a path from a node up to a root, linking a root into another tree, and making a node the root of the tree it appears in. One implementation of dynamic trees gives an O(lg n) amortized time bound for each operation; a more complicated implementation yields O(lg n) worst-case time bounds.
Dynamic trees are used in some of the asymptotically fastest network-flow algorithms.

• Splay trees, developed by Sleator and Tarjan [282] and discussed by Tarjan [292], are a form of binary search tree on which the standard search-tree operations run in O(lg n) amortized time.
One application of splay trees simplifies dynamic trees.

• Persistent data structures allow queries, and sometimes updates as well, on past versions of a data structure. Driscoll, Sarnak, Sleator, and Tarjan [82] present techniques for making linked data structures persistent with only a small time and space cost. Problem 13-1 gives a simple example of a persistent dynamic set.

• Several data structures allow a faster implementation of dictionary operations (INSERT, DELETE, and SEARCH) for a restricted universe of keys. By taking advantage of these restrictions, they are able to achieve better worst-case asymptotic running times than comparison-based data structures.
A data structure invented by van Emde Boas [301] supports the operations MINIMUM, MAXIMUM, INSERT, DELETE, SEARCH, EXTRACT-MIN, EXTRACT-MAX, PREDECESSOR, and SUCCESSOR in worst-case time O(lg lg n), subject to the restriction that the universe of keys is the set {1, 2, ..., n}. Fredman and Willard introduced fusion trees [99], which were the first data structure to allow faster dictionary operations when the universe is restricted to integers. They showed how to implement these operations in O(lg n / lg lg n) time. Several subsequent data structures, including exponential search trees [16], have also given improved bounds on some or all of the dictionary operations and are mentioned in the chapter notes throughout this book.

• Dynamic graph data structures support various queries while allowing the structure of a graph to change through operations that insert or delete vertices or edges. Examples of the queries that are supported include vertex connectivity [144], edge connectivity, minimum spanning trees [143], biconnectivity, and transitive closure [142].

Chapter notes throughout this book mention additional data structures.

[1] As in Problem 10-2, we have defined a mergeable heap to support MINIMUM and EXTRACT-MIN, and so we can also refer to it as a mergeable min-heap.
Alternatively, if it supported MAXIMUM and EXTRACT-MAX, it would be a mergeable max-heap. Unless we specify otherwise, mergeable heaps will be by default mergeable min-heaps.

Chapter 18: B-Trees

Overview

B-trees are balanced search trees designed to work well on magnetic disks or other direct-access secondary storage devices.
B-trees are similar to red-black trees (Chapter 13), but they are better at minimizing disk I/O operations. Many database systems use B-trees, or variants of B-trees, to store information.

B-trees differ from red-black trees in that B-tree nodes may have many children, from a handful to thousands.
That is, the "branching factor" of a B-tree can be quite large, although it is usually determined by characteristics of the disk unit used. B-trees are similar to red-black trees in that every n-node B-tree has height O(lg n), although the height of a B-tree can be considerably less than that of a red-black tree because its branching factor can be much larger. Therefore, B-trees can also be used to implement many dynamic-set operations in time O(lg n).

B-trees generalize binary search trees in a natural manner. Figure 18.1 shows a simple B-tree. If an internal B-tree node x contains n[x] keys, then x has n[x] + 1 children.
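The node layout just described, n[x] sorted keys partitioning n[x] + 1 children, can be sketched with an in-memory stand-in. This is a hypothetical illustration in the spirit of B-TREE-SEARCH, with names of my choosing; it omits the DISK-READ steps that a real B-tree would perform before examining a child.

```python
class BTreeNode:
    """In-memory stand-in for a B-tree node: 'keys' holds the n[x]
    keys in sorted order; 'children' holds n[x] + 1 children, or is
    empty at a leaf."""
    def __init__(self, keys, children=None):
        self.keys = keys
        self.children = children or []

    @property
    def leaf(self):
        return not self.children

def btree_search(x, k):
    """Return (node, index) with node.keys[index] == k, or None.
    Find the first key >= k in x; either report a match or descend
    into the child whose key range contains k."""
    i = 0
    while i < len(x.keys) and k > x.keys[i]:
        i += 1
    if i < len(x.keys) and k == x.keys[i]:
        return (x, i)
    if x.leaf:
        return None
    return btree_search(x.children[i], k)   # a real B-tree would DISK-READ here

# A node with 2 keys has 3 children, as the text describes:
root = BTreeNode([10, 20],
                 [BTreeNode([3, 7]), BTreeNode([14]), BTreeNode([25])])
print(btree_search(root, 14) is not None)   # True: found in the middle child
print(btree_search(root, 8))                # None: 8 is not present
```

Because each node is examined once per level, the search visits O(height) nodes, and the height bound discussed above is what makes every dynamic-set operation cheap in disk accesses.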