Introduction to Algorithms (2nd edition), Thomas Cormen, Charles Leiserson, Ronald Rivest — page 72
If the starting time of an activity occurs before the finish time of the most recently added activity (the arrow between them points left), it is rejected. Otherwise (the arrow points directly up or to the right), it is selected. The last recursive call, RECURSIVE-ACTIVITY-SELECTOR(s, f, 11, 12), returns Ø. The resulting set of selected activities is {a1, a4, a8, a11}.

Assuming that the activities have already been sorted by finish times, the running time of the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n + 1) is Θ(n), which we can see as follows.
Over all recursive calls, each activity is examined exactly once in the while loop test of line 2. In particular, activity ak is examined in the last call made in which i < k.

An iterative greedy algorithm

We can easily convert our recursive procedure to an iterative one. The procedure RECURSIVE-ACTIVITY-SELECTOR is almost "tail recursive" (see Problem 7-4): it ends with a recursive call to itself followed by a union operation. It is usually a straightforward task to transform a tail-recursive procedure to an iterative form; in fact, some compilers for certain programming languages perform this task automatically. As written, RECURSIVE-ACTIVITY-SELECTOR works for any subproblem Sij, but we have seen that we need to consider only subproblems for which j = n + 1, i.e., subproblems that consist of the last activities to finish.

The procedure GREEDY-ACTIVITY-SELECTOR is an iterative version of the procedure RECURSIVE-ACTIVITY-SELECTOR.
It also assumes that the input activities are ordered by monotonically increasing finish time. It collects selected activities into a set A and returns this set when it is done.

GREEDY-ACTIVITY-SELECTOR(s, f)
1  n ← length[s]
2  A ← {a1}
3  i ← 1
4  for m ← 2 to n
5      do if sm ≥ fi
6            then A ← A ∪ {am}
7                 i ← m
8  return A

The procedure works as follows. The variable i indexes the most recent addition to A, corresponding to the activity ai in the recursive version. Since the activities are considered in order of monotonically increasing finish time, fi is always the maximum finish time of any activity in A. That is,

fi = max {fk : ak ∈ A}.    (16.4)

Lines 2–3 select activity a1, initialize A to contain just this activity, and initialize i to index this activity.
The for loop of lines 4–7 finds the earliest activity to finish in Si,n+1. The loop considers each activity am in turn and adds am to A if it is compatible with all previously selected activities; such an activity is the earliest to finish in Si,n+1. To see if activity am is compatible with every activity currently in A, it suffices by equation (16.4) to check (line 5) that its start time sm is not earlier than the finish time fi of the activity most recently added to A. If activity am is compatible, then lines 6–7 add activity am to A and set i to m.
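The pseudocode above translates directly into Python. This is a minimal sketch, not from the text; the function name and 0-based indexing are choices made here, and the sample start/finish times in the example are illustrative, chosen so the run reproduces the {a1, a4, a8, a11} selection mentioned above.

```python
def greedy_activity_selector(s, f):
    """Iterative greedy activity selection.

    s, f are parallel lists of start and finish times, assumed sorted
    by monotonically increasing finish time. Returns the 0-based
    indices of the selected activities.
    """
    n = len(s)
    A = [0]               # always select the first activity to finish
    i = 0                 # index of the most recently selected activity
    for m in range(1, n):
        if s[m] >= f[i]:  # a_m is compatible with everything selected so far
            A.append(m)
            i = m
    return A

# Illustrative data (11 activities, sorted by finish time):
s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(greedy_activity_selector(s, f))  # → [0, 3, 7, 10], i.e., {a1, a4, a8, a11}
```

Note that the single pass over the input makes the Θ(n) running time immediate, given the pre-sorted finish times.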
The set A returned by the call GREEDY-ACTIVITY-SELECTOR(s, f) is precisely the set returned by the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n + 1).

Like the recursive version, GREEDY-ACTIVITY-SELECTOR schedules a set of n activities in Θ(n) time, assuming that the activities were already sorted initially by their finish times.

Exercises 16.1-1

Give a dynamic-programming algorithm for the activity-selection problem, based on the recurrence (16.3). Have your algorithm compute the sizes c[i, j] as defined above and also produce the maximum-size subset A of activities. Assume that the inputs have been sorted as in equation (16.1). Compare the running time of your solution to the running time of GREEDY-ACTIVITY-SELECTOR.

Exercises 16.1-2

Suppose that instead of always selecting the first activity to finish, we instead select the last activity to start that is compatible with all previously selected activities.
Describe how this approach is a greedy algorithm, and prove that it yields an optimal solution.

Exercises 16.1-3

Suppose that we have a set of activities to schedule among a large number of lecture halls. We wish to schedule all the activities using as few lecture halls as possible. Give an efficient greedy algorithm to determine which activity should use which lecture hall.

(This is also known as the interval-graph coloring problem. We can create an interval graph whose vertices are the given activities and whose edges connect incompatible activities. The smallest number of colors required to color every vertex so that no two adjacent vertices are given the same color corresponds to finding the fewest lecture halls needed to schedule all of the given activities.)

Exercises 16.1-4

Not just any greedy approach to the activity-selection problem produces a maximum-size set of mutually compatible activities.
Give an example to show that the approach of selecting the activity of least duration from those that are compatible with previously selected activities does not work. Do the same for the approaches of always selecting the compatible activity that overlaps the fewest other remaining activities and always selecting the compatible remaining activity with the earliest start time.

[1] We will sometimes speak of the sets Sij as subproblems rather than just sets of activities. It will always be clear from the context whether we are referring to Sij as a set of activities or the subproblem whose input is that set.

16.2 Elements of the greedy strategy

A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices.
For each decision point in the algorithm, the choice that seems best at the moment is chosen. This heuristic strategy does not always produce an optimal solution, but as we saw in the activity-selection problem, sometimes it does. This section discusses some of the general properties of greedy methods.

The process that we followed in Section 16.1 to develop a greedy algorithm was a bit more involved than is typical. We went through the following steps:

1. Determine the optimal substructure of the problem.
2. Develop a recursive solution.
3. Prove that at any stage of the recursion, one of the optimal choices is the greedy choice.
Thus, it is always safe to make the greedy choice.
4. Show that all but one of the subproblems induced by having made the greedy choice are empty.
5. Develop a recursive algorithm that implements the greedy strategy.
6. Convert the recursive algorithm to an iterative algorithm.

In going through these steps, we saw in great detail the dynamic-programming underpinnings of a greedy algorithm. In practice, however, we usually streamline the above steps when designing a greedy algorithm.
We develop our substructure with an eye toward making a greedy choice that leaves just one subproblem to solve optimally. For example, in the activity-selection problem, we first defined the subproblems Sij, where both i and j varied. We then found that if we always made the greedy choice, we could restrict the subproblems to be of the form Si,n+1.

Alternatively, we could have fashioned our optimal substructure with a greedy choice in mind. That is, we could have dropped the second subscript and defined subproblems of the form Si = {ak ∈ S : fi ≤ sk}. Then, we could have proven that a greedy choice (the first activity am to finish in Si), combined with an optimal solution to the remaining set Sm of compatible activities, yields an optimal solution to Si.
More generally, we design greedy algorithms according to the following sequence of steps:

1. Cast the optimization problem as one in which we make a choice and are left with one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
3. Demonstrate that, having made the greedy choice, what remains is a subproblem with the property that if we combine an optimal solution to the subproblem with the greedy choice we have made, we arrive at an optimal solution to the original problem.

We shall use this more direct process in later sections of this chapter.
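As a concrete illustration, the streamlined Si formulation discussed above can be sketched as a tail-recursive procedure in the style of RECURSIVE-ACTIVITY-SELECTOR. This is a sketch under stated assumptions, not the book's pseudocode: the Python function name is a choice made here, and the 1-indexing is simulated with a dummy entry at position 0 whose finish time is 0, so the initial call examines the whole input.

```python
def recursive_activity_selector(s, f, i, n):
    """Select a maximum-size set of compatible activities from S_i.

    s, f are start/finish times with a dummy activity at index 0
    (f[0] = 0), and activities 1..n are sorted by increasing finish
    time. S_i = {a_k : f_i <= s_k}; the greedy choice is the first
    activity in S_i to finish. Returns the chosen 1-based indices.
    """
    m = i + 1
    while m <= n and s[m] < f[i]:  # skip activities incompatible with a_i
        m += 1
    if m <= n:
        # greedy choice a_m, then solve the single remaining subproblem S_m
        return [m] + recursive_activity_selector(s, f, m, n)
    return []  # S_i is empty
```

Because each call makes the greedy choice and recurses on exactly one subproblem, converting this to a loop (as GREEDY-ACTIVITY-SELECTOR does) is mechanical.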