Nevertheless, beneath every greedy algorithm, there is almost always a more cumbersome dynamic-programming solution.

How can one tell if a greedy algorithm will solve a particular optimization problem? There is no way in general, but the greedy-choice property and optimal substructure are the two key ingredients. If we can demonstrate that the problem has these properties, then we are well on the way to developing a greedy algorithm for it.

Greedy-choice property

The first key ingredient is the greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice. In other words, when we are considering which choice to make, we make the choice that looks best in the current problem, without considering results from subproblems.

Here is where greedy algorithms differ from dynamic programming. In dynamic programming, we make a choice at each step, but the choice usually depends on the solutions to subproblems. Consequently, we typically solve dynamic-programming problems in a bottom-up manner, progressing from smaller subproblems to larger subproblems. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the subproblem arising after the choice is made. The choice made by a greedy algorithm may depend on choices so far, but it cannot depend on any future choices or on the solutions to subproblems. Thus, unlike dynamic programming, which solves the subproblems bottom up, a greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each given problem instance to a smaller one.

Of course, we must prove that a greedy choice at each step yields a globally optimal solution, and this is where cleverness may be required. Typically, as in the case of Theorem 16.1, the proof examines a globally optimal solution to some subproblem. It then shows that the solution can be modified to use the greedy choice, resulting in one similar but smaller subproblem.

The greedy-choice property often gains us some efficiency in making our choice in a subproblem.
For example, in the activity-selection problem, assuming that we had already sorted the activities in monotonically increasing order of finish times, we needed to examine each activity just once. It is frequently the case that by preprocessing the input or by using an appropriate data structure (often a priority queue), we can make greedy choices quickly, thus yielding an efficient algorithm.
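To make this concrete, here is a minimal Python sketch of the activity selector in this greedy form (the function name and the (start, finish) input format are our own illustration, not from the text):

    # A minimal sketch of greedy activity selection.
    # activities is a list of (start, finish) pairs.
    def select_activities(activities):
        # Preprocessing: sort by finish time, O(n lg n).
        activities = sorted(activities, key=lambda a: a[1])
        selected = []
        last_finish = float("-inf")
        for start, finish in activities:      # each activity examined just once
            if start >= last_finish:          # compatible with the last greedy choice
                selected.append((start, finish))
                last_finish = finish
        return selected

After the one-time sort, each greedy choice takes constant time, so the loop as a whole is linear.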
Optimal substructure

A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems. This property is a key ingredient of assessing the applicability of dynamic programming as well as greedy algorithms. As an example of optimal substructure, recall how we demonstrated in Section 16.1 that if an optimal solution to subproblem Sij includes an activity ak, then it must also contain optimal solutions to the subproblems Sik and Skj. Given this optimal substructure, we argued that if we knew which activity to use as ak, we could construct an optimal solution to Sij by selecting ak along with all activities in optimal solutions to the subproblems Sik and Skj. Based on this observation of optimal substructure, we were able to devise the recurrence (16.3) that described the value of an optimal solution.

We usually use a more direct approach regarding optimal substructure when applying it to greedy algorithms. As mentioned above, we have the luxury of assuming that we arrived at a subproblem by having made the greedy choice in the original problem. All we really need to do is argue that an optimal solution to the subproblem, combined with the greedy choice already made, yields an optimal solution to the original problem. This scheme implicitly uses induction on the subproblems to prove that making the greedy choice at every step produces an optimal solution.

Greedy versus dynamic programming

Because the optimal-substructure property is exploited by both the greedy and dynamic-programming strategies, one might be tempted to generate a dynamic-programming solution to a problem when a greedy solution suffices, or one might mistakenly think that a greedy solution works when in fact a dynamic-programming solution is required.
To illustrate the subtleties between the two techniques, let us investigate two variants of a classical optimization problem.

The 0–1 knapsack problem is posed as follows. A thief robbing a store finds n items; the ith item is worth vi dollars and weighs wi pounds, where vi and wi are integers. He wants to take as valuable a load as possible, but he can carry at most W pounds in his knapsack for some integer W. Which items should he take? (This is called the 0–1 knapsack problem because each item must either be taken or left behind; the thief cannot take a fractional amount of an item or take an item more than once.)

In the fractional knapsack problem, the setup is the same, but the thief can take fractions of items, rather than having to make a binary (0–1) choice for each item.
You can think of an item in the 0–1 knapsack problem as being like a gold ingot, while an item in the fractional knapsack problem is more like gold dust.

Both knapsack problems exhibit the optimal-substructure property. For the 0–1 problem, consider the most valuable load that weighs at most W pounds. If we remove item j from this load, the remaining load must be the most valuable load weighing at most W - wj that the thief can take from the n - 1 original items excluding j. For the comparable fractional problem, consider that if we remove a weight w of one item j from the optimal load, the remaining load must be the most valuable load weighing at most W - w that the thief can take from the n - 1 original items plus wj - w pounds of item j.

Although the problems are similar, the fractional knapsack problem is solvable by a greedy strategy, whereas the 0–1 problem is not. To solve the fractional problem, we first compute the value per pound vi/wi for each item.
Obeying a greedy strategy, the thief begins by taking as much as possible of the item with the greatest value per pound. If the supply of that item is exhausted and he can still carry more, he takes as much as possible of the item with the next greatest value per pound, and so forth until he can't carry any more. Thus, by sorting the items by value per pound, the greedy algorithm runs in O(n lg n) time. The proof that the fractional knapsack problem has the greedy-choice property is left as Exercise 16.2-1.
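Here is a minimal Python sketch of this strategy (the function name and the (value, weight) input format are our own illustration):

    # A sketch of the greedy strategy for the fractional knapsack problem.
    # items is a list of (value, weight) pairs; W is the knapsack capacity in pounds.
    def fractional_knapsack(items, W):
        # Sort by value per pound, greatest first: O(n lg n).
        items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
        total = 0.0
        remaining = W
        for value, weight in items:
            if remaining <= 0:
                break
            take = min(weight, remaining)      # take as much of this item as fits
            total += value * (take / weight)   # possibly a fraction of the item
            remaining -= take
        return total

On the instance of Figure 16.2, fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50) returns 240: all of items 1 and 2, plus 20 of item 3's 30 pounds, matching Figure 16.2(c).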
To see that this greedy strategy does not work for the 0–1 knapsack problem, consider the problem instance illustrated in Figure 16.2(a). There are 3 items, and the knapsack can hold 50 pounds. Item 1 weighs 10 pounds and is worth 60 dollars. Item 2 weighs 20 pounds and is worth 100 dollars. Item 3 weighs 30 pounds and is worth 120 dollars. Thus, the value per pound of item 1 is 6 dollars per pound, which is greater than the value per pound of either item 2 (5 dollars per pound) or item 3 (4 dollars per pound). The greedy strategy, therefore, would take item 1 first. As can be seen from the case analysis in Figure 16.2(b), however, the optimal solution takes items 2 and 3, leaving item 1 behind.
The two possible solutions that involve item 1 are both suboptimal.

Figure 16.2: The greedy strategy does not work for the 0–1 knapsack problem. (a) The thief must select a subset of the three items shown whose weight must not exceed 50 pounds. (b) The optimal subset includes items 2 and 3. Any solution with item 1 is suboptimal, even though item 1 has the greatest value per pound. (c) For the fractional knapsack problem, taking the items in order of greatest value per pound yields an optimal solution.

For the comparable fractional problem, however, the greedy strategy, which takes item 1 first, does yield an optimal solution, as shown in Figure 16.2(c). Taking item 1 doesn't work in the 0–1 problem because the thief is unable to fill his knapsack to capacity, and the empty space lowers the effective value per pound of his load.
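The case analysis of Figure 16.2(b) can also be checked by brute force; the following Python sketch (our own illustration) enumerates every feasible subset of the instance above:

    from itertools import combinations

    # Verify the case analysis of Figure 16.2(b) by enumerating all subsets.
    items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) for items 1, 2, 3
    W = 50
    best = max((subset for r in range(len(items) + 1)
                for subset in combinations(items, r)
                if sum(w for _, w in subset) <= W),
               key=lambda subset: sum(v for v, _ in subset))
    print(best)   # ((100, 20), (120, 30)): items 2 and 3, worth 220 dollars

Greedy-by-density instead takes items 1 and 2 for only 160 dollars, since the remaining 20 pounds of capacity cannot hold item 3.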
In the 0–1 problem, when we consider an item for inclusion in the knapsack, we must compare the solution to the subproblem in which the item is included with the solution to the subproblem in which the item is excluded before we can make the choice. The problem formulated in this way gives rise to many overlapping subproblems—a hallmark of dynamic programming, and indeed, dynamic programming can be used to solve the 0–1 problem. (See Exercise 16.2-2; a sketch of such a solution appears after the exercises below.)

Exercises 16.2-1

Prove that the fractional knapsack problem has the greedy-choice property.

Exercises 16.2-2

Give a dynamic-programming solution to the 0–1 knapsack problem that runs in O(nW) time, where n is the number of items and W is the maximum weight of items that the thief can put in his knapsack.

Exercises 16.2-3

Suppose that in a 0–1 knapsack problem, the order of the items when sorted by increasing weight is the same as their order when sorted by decreasing value. Give an efficient algorithm to find an optimal solution to this variant of the knapsack problem, and argue that your algorithm is correct.
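For reference, here is a minimal Python sketch of the standard O(nW) dynamic program of the kind Exercise 16.2-2 asks for (a one-dimensional table variant; the names are our own illustration):

    # A sketch of the standard O(nW) dynamic program for the 0-1 knapsack problem.
    # values[i] and weights[i] describe item i; W is the integer capacity in pounds.
    def knapsack_01(values, weights, W):
        # best[j] = maximum value achievable with capacity j using the items seen so far.
        best = [0] * (W + 1)
        for i in range(len(values)):
            # Iterate capacities downward so each item is used at most once (the 0-1 choice).
            for j in range(W, weights[i] - 1, -1):
                # Compare excluding item i (best[j]) with including it.
                best[j] = max(best[j], best[j - weights[i]] + values[i])
        return best[W]

On the instance of Figure 16.2, knapsack_01([60, 100, 120], [10, 20, 30], 50) returns 220, the value of taking items 2 and 3.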