Introduction to Algorithms, 2nd edition. Thomas Cormen, Charles Leiserson, Ronald Rivest
This process continues until all n people have been removed. The order in which the people are removed from the circle defines the (n, m)-Josephus permutation of the integers 1, 2, ..., n. For example, the (7, 3)-Josephus permutation is ⟨3, 6, 2, 7, 5, 1, 4⟩.

a. Suppose that m is a constant. Describe an O(n)-time algorithm that, given an integer n, outputs the (n, m)-Josephus permutation.

b. Suppose that m is not a constant. Describe an O(n lg n)-time algorithm that, given integers n and m, outputs the (n, m)-Josephus permutation.

Chapter Notes

In their book, Preparata and Shamos [247] describe several of the interval trees that appear in the literature, citing work by H.
Edelsbrunner (1980) and E. M. McCreight (1981). The book details an interval tree for which, given a static database of n intervals, all k intervals that overlap a given query interval can be enumerated in O(k + lg n) time.

Part IV: Advanced Design and Analysis Techniques

Chapter List

Chapter 15: Dynamic Programming
Chapter 16: Greedy Algorithms
Chapter 17: Amortized Analysis

Introduction

This part covers three important techniques for the design and analysis of efficient algorithms: dynamic programming (Chapter 15), greedy algorithms (Chapter 16), and amortized analysis (Chapter 17).
Earlier parts have presented other widely applicable techniques, such as divide-and-conquer, randomization, and the solution of recurrences. The new techniques are somewhat more sophisticated, but they are useful for effectively attacking many computational problems. The themes introduced in this part will recur later in the book.

Dynamic programming typically applies to optimization problems in which a set of choices must be made in order to arrive at an optimal solution. As choices are made, subproblems of the same form often arise.
Dynamic programming is effective when a given subproblem may arise from more than one partial set of choices; the key technique is to store the solution to each such subproblem in case it should reappear. Chapter 15 shows how this simple idea can sometimes transform exponential-time algorithms into polynomial-time algorithms.

Like dynamic-programming algorithms, greedy algorithms typically apply to optimization problems in which a set of choices must be made in order to arrive at an optimal solution. The idea of a greedy algorithm is to make each choice in a locally optimal manner. A simple example is coin-changing: to minimize the number of U.S.
coins needed to make change for a given amount, it suffices to select repeatedly the largest-denomination coin that is not larger than the amount still owed. There are many such problems for which a greedy approach provides an optimal solution much more quickly than would a dynamic-programming approach. It is not always easy to tell whether a greedy approach will be effective, however. Chapter 16 reviews matroid theory, which can often be helpful in making such a determination.

Amortized analysis is a tool for analyzing algorithms that perform a sequence of similar operations. Instead of bounding the cost of the sequence of operations by bounding the actual cost of each operation separately, an amortized analysis can be used to provide a bound on the actual cost of the entire sequence.
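As a small concrete illustration, consider a stack that supports a MULTIPOP operation in addition to the usual PUSH and POP, along the lines of an example analyzed in Chapter 17. The sketch below is a minimal Python rendering under that assumption; the class and method names are chosen here for illustration and are not taken from the text.

    # A stack supporting push, pop, and multipop(k), which pops up to k elements.
    # A single multipop can cost time proportional to the stack size, yet any
    # sequence of n operations starting from an empty stack costs only O(n) in
    # total, because each element is popped at most once after being pushed.
    class MultipopStack:
        def __init__(self):
            self._items = []

        def push(self, x):
            self._items.append(x)

        def pop(self):
            return self._items.pop()

        def multipop(self, k):
            popped = []
            while self._items and k > 0:
                popped.append(self._items.pop())
                k -= 1
            return popped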
One reason this idea can be effective is that it may be impossible in a sequence of operations for all of the individual operations to run in their known worst-case time bounds. While some operations are expensive, many others might be cheap. Amortized analysis is not just an analysis tool, however; it is also a way of thinking about the design of algorithms, since the design of an algorithm and the analysis of its running time are often closely intertwined. Chapter 17 introduces three ways to perform an amortized analysis of an algorithm.

Chapter 15: Dynamic Programming

Overview

Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. ("Programming" in this context refers to a tabular method, not to writing computer code.) As we saw in Chapter 2, divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.
In contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. In this context, a divide-and-conquer algorithm does more work than necessary, repeatedly solving the common subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.

Dynamic programming is typically applied to optimization problems.
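To make the table idea just described concrete, here is a minimal sketch on a toy recurrence (Fibonacci numbers, which are not one of this chapter's examples); the function name and the use of a Python dictionary as the table are choices made here for illustration.

    # Each subproblem fib(k) is solved only once; its answer is saved in a table
    # (here a Python dictionary), so later occurrences are looked up instead of
    # being recomputed. Without the table, the recursion takes exponential time.
    def fib(n, table=None):
        if table is None:
            table = {}
        if n not in table:
            table[n] = n if n < 2 else fib(n - 1, table) + fib(n - 2, table)
        return table[n]

Each value fib(k) is computed once and then reused, which is exactly the behavior described above; the optimization problems in this chapter require the same idea plus a way to compare candidate solutions.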
In such optimization problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

The development of a dynamic-programming algorithm can be broken into a sequence of four steps.

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Steps 1–3 form the basis of a dynamic-programming solution to a problem.
Step 4 can be omitted if only the value of an optimal solution is required. When we do perform step 4, we sometimes maintain additional information during the computation in step 3 to ease the construction of an optimal solution.

The sections that follow use the dynamic-programming method to solve some optimization problems. Section 15.1 examines a problem in scheduling two automobile assembly lines, where after each station, the auto under construction can stay on the same line or move to the other.
Section 15.2 asks how we can multiply a chain of matrices so that the fewest total scalar multiplications are performed. Given these examples of dynamic programming, Section 15.3 discusses two key characteristics that a problem must have for dynamic programming to be a viable solution technique. Section 15.4 then shows how to find the longest common subsequence of two sequences.
Finally, Section 15.5 uses dynamic programming to construct binary search trees that are optimal, given a known distribution of keys to be looked up.

15.1 Assembly-line scheduling

Our first example of dynamic programming solves a manufacturing problem.
The Colonel Motors Corporation produces automobiles in a factory that has two assembly lines, shown in Figure 15.1. An automobile chassis enters each assembly line, has parts added to it at a number of stations, and a finished auto exits at the end of the line. Each assembly line has n stations, numbered j = 1, 2, ..., n. We denote the jth station on line i (where i is 1 or 2) by S_{i,j}. The jth station on line 1 (S_{1,j}) performs the same function as the jth station on line 2 (S_{2,j}).
The stations were built at different times and with different technologies, however, so that the time required at each station varies, even between stations at the same position on the two different lines. We denote the assembly time required at station S_{i,j} by a_{i,j}. As Figure 15.1 shows, a chassis enters station 1 of one of the assembly lines, and it progresses from each station to the next.
There is also an entry time e_i for the chassis to enter assembly line i and an exit time x_i for the completed auto to exit assembly line i.

Figure 15.1: A manufacturing problem to find the fastest way through a factory. There are two assembly lines, each with n stations; the jth station on line i is denoted S_{i,j} and the assembly time at that station is a_{i,j}. An automobile chassis enters the factory, and goes onto line i (where i = 1 or 2), taking e_i time. After going through the jth station on a line, the chassis goes on to the (j + 1)st station on either line. There is no transfer cost if it stays on the same line, but it takes time t_{i,j} to transfer to the other line after station S_{i,j}.
After exiting the nth station on a line, it takes x_i time for the completed auto to exit the factory. The problem is to determine which stations to choose from line 1 and which to choose from line 2 in order to minimize the total time through the factory for one auto.

Normally, once a chassis enters an assembly line, it passes through that line only. The time to go from one station to the next within the same assembly line is negligible. Occasionally a special rush order comes in, and the customer wants the automobile to be manufactured as quickly as possible. For rush orders, the chassis still passes through the n stations in order, but the factory manager may switch the partially completed auto from one assembly line to the other after any station.
The time to transfer a chassis away from assembly line i after having gone through station S_{i,j} is t_{i,j}, where i = 1, 2 and j = 1, 2, ..., n - 1 (since after the nth station, assembly is complete). The problem is to determine which stations to choose from line 1 and which to choose from line 2 in order to minimize the total time through the factory for one auto.
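As a preview of the kind of bottom-up computation that dynamic programming yields for this problem, here is a hedged sketch in Python. The parameter names a, t, e, and x mirror the quantities defined above but are indexed from 0, and the numbers in the usage line at the end are made up for illustration; nothing here is taken verbatim from the text.

    # Bottom-up computation of the fastest total time through the factory.
    # a[i][j]   : assembly time at station j of line i (i in {0, 1}, j in 0..n-1)
    # t[i][j]   : time to transfer to the other line after station j of line i
    #             (defined for j in 0..n-2)
    # e[i], x[i]: entry and exit times for line i
    def fastest_way(a, t, e, x):
        n = len(a[0])
        # f[i][j] = fastest time to get a chassis through station j of line i
        f = [[0] * n for _ in range(2)]
        f[0][0] = e[0] + a[0][0]
        f[1][0] = e[1] + a[1][0]
        for j in range(1, n):
            # Reach station j on a line either by staying on that line
            # or by transferring from the other line after station j - 1.
            f[0][j] = min(f[0][j - 1], f[1][j - 1] + t[1][j - 1]) + a[0][j]
            f[1][j] = min(f[1][j - 1], f[0][j - 1] + t[0][j - 1]) + a[1][j]
        return min(f[0][n - 1] + x[0], f[1][n - 1] + x[1])

    # Hypothetical instance with n = 3 stations per line (numbers made up):
    print(fastest_way(a=[[4, 5, 3], [2, 10, 1]], t=[[1, 2], [2, 1]], e=[1, 3], x=[2, 2]))

Recording, for each station, which line achieved the minimum would also allow the chosen sequence of stations to be reconstructed, which corresponds to step 4 of the four-step method described above.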