Introduction to Algorithms (2nd edition), Thomas Cormen, Charles Leiserson, Ronald Rivest (811417), page 71
We have seen that any solution to a nonempty subproblem Sij includes some activity ak, and that any optimal solution contains within it optimal solutions to the subproblem instances Sik and Skj. Thus, we can build a maximum-size subset of mutually compatible activities in Sij by splitting the problem into two subproblems (finding maximum-size subsets of mutually compatible activities in Sik and Skj), finding maximum-size subsets Aik and Akj of mutually compatible activities for these subproblems, and forming our maximum-size subset Aij of mutually compatible activities as

    Aij = Aik ∪ {ak} ∪ Akj.     (16.2)

An optimal solution to the entire problem is a solution to S0,n+1.

A recursive solution

The second step in developing a dynamic-programming solution is to recursively define the value of an optimal solution.
For the activity-selection problem, we let c[i, j] be the number of activities in a maximum-size subset of mutually compatible activities in Sij. We have c[i, j] = 0 whenever Sij = Ø; in particular, c[i, j] = 0 for i ≥ j.

Now consider a nonempty subproblem Sij. As we have seen, if ak is used in a maximum-size subset of mutually compatible activities of Sij, we also use maximum-size subsets of mutually compatible activities for the subproblems Sik and Skj. Using equation (16.2), we have the recurrence

    c[i, j] = c[i, k] + c[k, j] + 1.

This recursive equation assumes that we know the value of k, which we do not. There are j - i - 1 possible values for k, namely k = i + 1, ..., j - 1. Since the maximum-size subset of Sij must use one of these values for k, we check them all to find the best.
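As a sketch of how this recurrence can be evaluated directly, the following memoized Python function checks every candidate k as just described. The function name, data layout, and sample start/finish times (assumed here to be the 11-activity instance used in this chapter) are illustrative, not from the text:

```python
from functools import lru_cache

def max_compatible(start, finish):
    """Size of a maximum subset of mutually compatible activities,
    computed from the recurrence c[i, j] = max c[i, k] + c[k, j] + 1."""
    n = len(start)
    # Add fictitious activities a0 (finishes at time 0) and a_{n+1}
    # (starts at infinity), so the whole problem is subproblem S_{0,n+1}.
    s = [0] + list(start) + [float("inf")]
    f = [0] + list(finish) + [float("inf")]

    @lru_cache(maxsize=None)
    def c(i, j):
        best = 0
        for k in range(i + 1, j):
            # a_k is in S_ij when it starts after a_i finishes
            # and finishes before a_j starts.
            if f[i] <= s[k] and f[k] <= s[j]:
                best = max(best, c(i, k) + c(k, j) + 1)
        return best

    return c(0, n + 1)

# Assumed sample data: 11 activities sorted by increasing finish time.
s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(max_compatible(s, f))   # → 4
```

With memoization there are O(n²) subproblems, each trying O(n) choices of k, so this sketch runs in O(n³) time; the greedy simplification developed next removes that work.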
Thus, our full recursive definition of c[i, j] becomes

    c[i, j] = 0                                                      if Sij = Ø,
    c[i, j] = max {c[i, k] + c[k, j] + 1 : i < k < j and ak ∈ Sij}   if Sij ≠ Ø.     (16.3)

Converting a dynamic-programming solution to a greedy solution

At this point, it would be a straightforward exercise to write a tabular, bottom-up, dynamic-programming algorithm based on recurrence (16.3). In fact, Exercise 16.1-1 asks you to do just that. There are two key observations, however, that allow us to simplify our solution.

Theorem 16.1

Consider any nonempty subproblem Sij, and let am be the activity in Sij with the earliest finish time:

    fm = min {fk : ak ∈ Sij}.

Then:

1.
Activity am is used in some maximum-size subset of mutually compatible activities of Sij.

2. The subproblem Sim is empty, so that choosing am leaves the subproblem Smj as the only one that may be nonempty.

Proof We shall prove the second part first, since it's a bit simpler. Suppose that Sim is nonempty, so that there is some activity ak such that fi ≤ sk < fk ≤ sm < fm.
Then ak is also in Sij and it has an earlier finish time than am, which contradicts our choice of am. We conclude that Sim is empty.

To prove the first part, we suppose that Aij is a maximum-size subset of mutually compatible activities of Sij, and let us order the activities in Aij in monotonically increasing order of finish time.
Let ak be the first activity in Aij. If ak = am, we are done, since we have shown that am is used in some maximum-size subset of mutually compatible activities of Sij. If ak ≠ am, we construct the subset A'ij = (Aij - {ak}) ∪ {am}. The activities in A'ij are disjoint, since the activities in Aij are, ak is the first activity in Aij to finish, and fm ≤ fk. Noting that A'ij has the same number of activities as Aij, we see that A'ij is a maximum-size subset of mutually compatible activities of Sij that includes am.

Why is Theorem 16.1 so valuable? Recall from Section 15.3 that optimal substructure varies in how many subproblems are used in an optimal solution to the original problem and in how many choices we have in determining which subproblems to use.
In our dynamic-programming solution, two subproblems are used in an optimal solution, and there are j - i - 1 choices when solving the subproblem Sij. Theorem 16.1 reduces both of these quantities significantly: only one subproblem is used in an optimal solution (the other subproblem is guaranteed to be empty), and when solving the subproblem Sij, we need consider only one choice: the one with the earliest finish time in Sij. Fortunately, we can easily determine which activity this is.

In addition to reducing the number of subproblems and the number of choices, Theorem 16.1 yields another benefit: we can solve each subproblem in a top-down fashion, rather than the bottom-up manner typically used in dynamic programming.
To solve the subproblem Sij, we choose the activity am in Sij with the earliest finish time and add to this solution the set of activities used in an optimal solution to the subproblem Smj. Because we know that, having chosen am, we will certainly be using a solution to Smj in our optimal solution to Sij, we do not need to solve Smj before solving Sij. To solve Sij, we can first choose am as the activity in Sij with the earliest finish time and then solve Smj.

Note also that there is a pattern to the subproblems that we solve. Our original problem is S = S0,n+1. Suppose that we choose am1 as the activity in S0,n+1 with the earliest finish time.
(Since we have sorted the activities by monotonically increasing finish times and f0 = 0, we must have m1 = 1.) Our next subproblem is Sm1,n+1. Now suppose that we choose am2 as the activity in Sm1,n+1 with the earliest finish time. (It is not necessarily the case that m2 = 2.) Our next subproblem is Sm2,n+1. Continuing, we see that each subproblem will be of the form Smi,n+1 for some activity number mi. In other words, each subproblem consists of the last activities to finish, and the number of such activities varies from subproblem to subproblem.

There is also a pattern to the activities that we choose. Because we always choose the activity with the earliest finish time in Smi,n+1, the finish times of the activities chosen over all subproblems will be strictly increasing over time.
Moreover, we can consider each activity just once overall, in monotonically increasing order of finish times.

The activity am that we choose when solving a subproblem is always the one with the earliest finish time that can be legally scheduled. The activity picked is thus a "greedy" choice in the sense that, intuitively, it leaves as much opportunity as possible for the remaining activities to be scheduled. That is, the greedy choice is the one that maximizes the amount of unscheduled time remaining.

A recursive greedy algorithm

Now that we have seen how to streamline our dynamic-programming solution, and how to treat it as a top-down method, we are ready to see an algorithm that works in a purely greedy, top-down fashion.
We give a straightforward, recursive solution as the procedure RECURSIVE-ACTIVITY-SELECTOR. It takes the start and finish times of the activities, represented as arrays s and f, as well as the starting indices i and j of the subproblem Sij it is to solve. It returns a maximum-size set of mutually compatible activities in Sij. We assume that the n input activities are ordered by monotonically increasing finish time, according to equation (16.1). If not, we can sort them into this order in O(n lg n) time, breaking ties arbitrarily.
The initial call is RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n + 1).

RECURSIVE-ACTIVITY-SELECTOR(s, f, i, j)
1  m ← i + 1
2  while m < j and sm < fi      ▹ Find the first activity in Sij.
3      do m ← m + 1
4  if m < j
5      then return {am} ∪ RECURSIVE-ACTIVITY-SELECTOR(s, f, m, j)
6      else return Ø

Figure 16.1 shows the operation of the algorithm. In a given recursive call RECURSIVE-ACTIVITY-SELECTOR(s, f, i, j), the while loop of lines 2–3 looks for the first activity in Sij. The loop examines ai+1, ai+2, ..., aj-1, until it finds the first activity am that is compatible with ai; such an activity has sm ≥ fi.
If the loop terminates because it finds such an activity, the procedure returns in line 5 the union of {am} and the maximum-size subset of Smj returned by the recursive call RECURSIVE-ACTIVITY-SELECTOR(s, f, m, j). Alternatively, the loop may terminate because m ≥ j, in which case we have examined all activities whose finish times are before that of aj without finding one that is compatible with ai. In this case, Sij = Ø, and so the procedure returns Ø in line 6.

Figure 16.1: The operation of RECURSIVE-ACTIVITY-SELECTOR on the 11 activities given earlier. Activities considered in each recursive call appear between horizontal lines. The fictitious activity a0 finishes at time 0, and in the initial call, RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, 12), activity a1 is selected. In each recursive call, the activities that have already been selected are shaded, and the activity shown in white is being considered.
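For concreteness, the pseudocode translates almost line for line into Python. Here Python lists play the role of the arrays s and f, with the fictitious activity a0 at index 0, and the test m ≤ n stands in for m < j with j = n + 1. The sample values are assumed to be the 11-activity instance shown in Figure 16.1; the function name is illustrative:

```python
def recursive_activity_selector(s, f, i, n):
    """Return the indices of a maximum-size set of mutually compatible
    activities in S_{i,n+1}; mirrors RECURSIVE-ACTIVITY-SELECTOR."""
    m = i + 1
    while m <= n and s[m] < f[i]:   # lines 2-3: find the first activity in Sij
        m += 1
    if m <= n:                      # line 4
        # line 5: {a_m} ∪ recursive solution to S_{m,n+1}
        return [m] + recursive_activity_selector(s, f, m, n)
    return []                       # line 6: S_{i,n+1} is empty

# Assumed sample data: the 11 activities of Figure 16.1, with the
# fictitious activity a0 (finishing at time 0) placed at index 0.
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(recursive_activity_selector(s, f, 0, 11))   # → [1, 4, 8, 11]
```

Given activities already sorted by finish time, each activity is examined exactly once across all recursive calls, so the selection itself takes Θ(n) time.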