Distributed Algorithms. Nancy A. Lynch (1993), page 39
This yields an infinite execution that is fair to all processes and that locks out, for example, p_j.

The key idea to remember in the above proof is the construction of bad fair executions by splicing together fair executions. Also, note the apparent paradox: there is a nontrivial inherent cost to lockout-free mutual exclusion, even though we are assuming fair exclusive access to a shared variable, which seems very close.

16.1.2 Other Resource Allocation Problems

Mutual exclusion is only one kind of resource allocation problem, modeling access to a single shared resource.
In this section we generalize the problem. There are two ways to describe the generalizations: the first is exclusion problems, and the second is resource problems. We start with the new definitions.

Definitions

In the first variant, we describe exclusion problems. Formally, we describe these problems by a collection E of "conflicting sets of processes" that are not allowed to coexist in C.
E could be any collection of sets of processes that is closed under containment, i.e., if S ∈ E and S ⊆ S′, then S′ ∈ E. For example, the mutual exclusion problem is characterized by

    E = {S : |S| ≥ 2}.

A common generalization is the k-exclusion problem, modeling k shared resources, where at most k processes may be in the critical region at any given time. In our language, the problem is defined by

    E = {S : |S| ≥ k + 1}.

Clearly, we can be more arbitrary, e.g., say E is defined to contain {1,2}, {1,3}, {2,4}, {3,4}, and all their supersets.
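Because E is closed under containment, it can be represented implicitly by a conflict predicate rather than an explicit collection of sets. The following minimal Python sketch (names are ours, not from the text) expresses the mutual exclusion and k-exclusion collections this way:

```python
# Exclusion problems represented by a conflict predicate: a set S of
# processes is in E iff the predicate holds. Closure under containment
# is automatic, since both predicates are monotone in |S|.

def mutex_conflict(s):
    """Mutual exclusion: E = {S : |S| >= 2}."""
    return len(s) >= 2

def k_exclusion_conflict(s, k):
    """k-exclusion: E = {S : |S| >= k + 1}."""
    return len(s) >= k + 1

# With k = 2 shared resources, two processes may be in C together,
# but any three coexisting processes form a conflicting set.
assert not k_exclusion_conflict({1, 2}, k=2)
assert k_exclusion_conflict({1, 2, 3}, k=2)
```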
But note that in this case, 1 doesn't exclude 4, and 2 doesn't exclude 3.

This leads us to the second variant of the definition, in terms of resources. In this case, we have a set of explicit resources in the problem statement, and for each p_i we give a list of requirements, in the form of a monotone Boolean formula, e.g., (R1 ∧ R2) ∨ (R3 ∧ R4), where the R_i's are resources. Intuitively, the meaning of the example is that p_i needs either both R1 and R2, or both R3 and R4, in order to go critical. More formally, a set of resources is acceptable for a process to go critical iff setting all the corresponding variables to true yields a satisfying assignment for the formula.

Let's consider another simple example to illustrate this style of formulation.
Suppose p1 wants resources R1 and R2, p2 wants R2 and R3, p3 wants R3 and R4, and p4 wants R4 and R1. In this example, the corresponding formulae have only a single clause (i.e., there are no options here).

Note that any resource problem gives rise to an exclusion problem, where the excluded sets are those combinations of processes whose formulas cannot be simultaneously satisfied. E.g., for the example above, we get an equivalent formulation by

    E = {S : S ⊇ {1,2}, {2,3}, {3,4}, or {4,1}}.

It is also true that every exclusion condition can be expressed as a resource problem, and thus these formulations are equivalent.

For stating a general problem to be solved in concurrent systems, we again assume the cyclic behavior of the requests as before.
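In the single-clause case the translation from resources to exclusion sets is mechanical: two processes conflict exactly when their resource sets intersect. A small illustrative Python sketch (the `needs` table is our encoding of the example above):

```python
# Derive the minimal conflicting pairs induced by single-clause
# resource requirements: i and j conflict iff they share a resource.
# E itself is then every superset of one of these minimal pairs.

needs = {
    1: {"R1", "R2"},
    2: {"R2", "R3"},
    3: {"R3", "R4"},
    4: {"R4", "R1"},
}

minimal_conflicts = {
    frozenset({i, j})
    for i in needs
    for j in needs
    if i < j and needs[i] & needs[j]   # shared resource => conflict
}

# Matches the exclusion formulation in the text: the ring of pairs
# {1,2}, {2,3}, {3,4}, {4,1}, and nothing else (e.g. not {1,3}).
assert minimal_conflicts == {frozenset(p) for p in [{1, 2}, {2, 3}, {3, 4}, {4, 1}]}
```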
The mutual exclusion condition is now replaced by a general exclusion requirement. The definitions of progress and lockout-freedom remain the same as before. We actually want to require something more, since there are some trivial solutions that satisfy the requirements listed so far: any solution to mutual exclusion is a fortiori a solution to more general exclusion. But there's something wrong with this: we want to insist that more concurrency is somehow possible. We don't know what is the best way to express this intuitive notion.
We give here a slightly stronger condition, in terms of the resource formulation, as follows.

    Consider any reachable state in which process i is in T, and every process that has any of the same-named resources in its resource requirement formula is in R. Then if all the conflicting processes stay in R and i continues to take steps, eventually i proceeds to C. (Note: we allow for other processes to halt.)

Dijkstra had originally defined a special case, called the Dining Philosophers problem, described below as a resource problem.

[Figure 16.1: The dining philosophers problem for n = 5. Philosophers p0, ..., p4 alternate around a table with forks F0, ..., F4.]

In the traditional scenario (see Figure 16.1), we have five (say) philosophers at a table, usually thinking (i.e., in R).
From time to time, each philosopher wants to eat (i.e., to enter C), which requires two forks (representing the resources); we assume that each philosopher can only take the forks adjacent to him (or her). Thus, the formula for p_i is F_{i-1} ∧ F_i (counting mod 5). We can represent this as an exclusion problem, namely:

    E = {S : S ⊇ {i, i+1} for some 0 ≤ i < 5},

where indices are taken mod 5. We assume that there is a shared variable associated with each fork, and the access model for the variables is read-modify-write.
If the processes are all identical and they don't use their indices, then we have the same kind of symmetry problem we saw in electing a leader in a ring, which cannot be solved. To be concrete, suppose they all try to pick up their left fork first, and then their right. This satisfies the exclusion condition, but it can deadlock: if all wake up at the same time and grab their left forks, then no one can proceed (this is worse than a deadlocking execution: the system actually wedges).
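The wedged configuration can be made concrete in a toy model (ours, not the book's shared-variable code): every process holds its left fork F_i and is blocked on its right fork F_{i−1}, which its left neighbor holds.

```python
# All n philosophers wake up simultaneously and each grabs its left
# fork F_i. Every right fork F_{(i-1) mod n} is then already held by
# the left neighbor, so no process can take another step: deadlock.

n = 5
holder = {i: i for i in range(n)}   # fork F_i is held by process i (its left user)

# Each process i now needs its right fork, F_{(i-1) mod n}.
blocked = [i for i in range(n) if holder[(i - 1) % n] != i]

assert blocked == list(range(n))    # everyone is blocked: the system wedges
```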
Thus, we must use some kind of symmetry-breaking technique, and here we make the assumption that the process indices (i.e., locations) are known to the processes.

Dijkstra's original solution requires simultaneous locking of the two adjacent forks, uses a centralized shared read-modify-write variable, and has no fairness. We'll see a better solution below.

The Left-Right Algorithm

This algorithm appeared in Burns' paper, but it is generally considered to be "folklore".
This algorithm ensures exclusion, deadlock-freedom (in the stronger version stated above), and lockout-freedom. Also, it has a good time bound, independent of the size of the ring.

This is a simple style of algorithm, based on running around and collecting the forks one at a time. We have to be careful about the order in which this is done. For instance, if the order is arbitrary, then we risk deadlock as above.
Even if there is no danger of deadlock, we might be doing very poorly in terms of time performance. E.g., if everyone gets their forks in numerical order of forks, then although they don't deadlock, this might result in a long waiting chain. To see this, consider the following scenario. In Figure 16.1 above, suppose that p4 gets both forks, then p3 gets its right fork and waits for its left, p2 gets its right fork and waits for its left, p1 gets its right fork and waits for its left, and p0 waits for its right fork.
(See Figure 16.2 for the final configuration.)

[Figure 16.2: A configuration with a long waiting chain. Arrows indicate possession of forks.]

In the last state we have a chain where p0 waits for p1, which waits for p2, which waits for p3, which waits for p4. The processes in the waiting chain can go into the critical region only sequentially. In general, this means that the time can be at least linear in the size of the ring.

In contrast, the Burns algorithm has a constant bound, independent of the size of the ring.
The algorithm accomplishes this by avoiding long waiting chains. We describe the algorithm here for n even; the variant for odd n is similar, and is left as an exercise. The algorithm is based on the following simple rule: even-numbered processes pick up their left forks first, and odd-numbered processes pick up their right forks first. Thus, the asymmetry here is the knowledge of the parity.
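The parity rule can be sketched with ordinary locks standing in for the fork variables (an illustrative model of ours only; the actual algorithm uses read-modify-write queue variables):

```python
# Left-right rule for n even: process i's left fork is F_i and its
# right fork is F_{(i-1) mod n}. Even processes acquire left first,
# odd processes right first. Either way, the *first* fork acquired
# always has an even index, so every waits-for edge goes from an even
# fork to an odd fork and no cycle (hence no deadlock) can form.
import threading

n = 6
forks = [threading.Lock() for _ in range(n)]
meals = [0] * n

def philosopher(i, rounds=3):
    left, right = forks[i], forks[(i - 1) % n]
    first, second = (left, right) if i % 2 == 0 else (right, left)
    for _ in range(rounds):
        with first:
            with second:
                meals[i] += 1          # critical region: holds both forks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(n)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert meals == [3] * n                # every philosopher eats: no deadlock
```

Note that if every process instead acquired `left` first, the same program could wedge exactly as described above.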
Variables correspond to the forks, as before. Now, each variable contains a queue of processes waiting for the fork, of length at most 2. The first process on the queue is assumed to own the fork. The code is given in Figure 16.3.

Properties. Mutual exclusion is straightforward. Deadlock-freedom follows from lockout-freedom, which in turn follows from a time bound, which we prove now.
Assume as usual that the step time is at most s, and that the critical region time is at most c. Define T(n) to be the worst-case time from entry to the trying region until entry to the critical region (in an n-process ring). Our goal is to bound T(n). As an auxiliary quantity, we define S(n) to be the worst-case time from when a process has its first fork until entry to the critical region.

    try_i
        Effect: pc := look-left

    look-left_i
        Precondition: pc = look-left
        Effect: if i is not on x_i.queue then add i to end
                else if i is first on x_i.queue then pc := look-right

    look-right_i
        Precondition: pc = look-right
        Effect: if i is not on x_{i-1}.queue then add i to end
                else if i is first on x_{i-1}.queue then pc := before-C

    crit_i
        Precondition: pc = before-C
        Effect: pc := C

    exit_i
        Effect: return-set := {L, R}; pc := return

    return-left_i
        Precondition: pc = return; L ∈ return-set
        Effect: return-set := return-set − {L}; remove i from x_i.queue

    return-right_i
        Precondition: pc = return; R ∈ return-set
        Effect: return-set := return-set − {R}; remove i from x_{i-1}.queue

    rem_i
        Precondition: pc = return; return-set = ∅
        Effect: pc := rem

Figure 16.3: Left-right dining philosophers algorithm: code for process p_i, i even.

Let us start by bounding T(n) in terms of S(n): p_i tests its first fork within time s of when it enters T.
When p_i first tries, either it gets its first fork immediately or not. If so, then S(n) time later, it goes critical. If not, then the other process has the first fork. At worst, that other process has just gotten the fork, so it takes at most S(n) + c + 2s until it gets to C, finishes C, and returns both forks. Then, since p_i recorded its index in the queue for its first fork, within additional time s, p_i will re-test the first fork and succeed in getting it, and then in additional S(n) time p_i reaches C. This analysis says that

    T(n) ≤ s + max{S(n), S(n) + c + 2s + s + S(n)} = 4s + c + 2S(n).    (16.4)

We now bound S(n): p_i tests its second fork within time s of when it gets the first fork. Again we need to consider two cases.
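As a quick arithmetic check of (16.4) (our own sketch, not from the text): since s, c, S(n) ≥ 0, the second branch of the max always dominates the first, so the bound simplifies to 4s + c + 2S(n).

```python
# Evaluate the right-hand side of (16.4) literally and confirm it
# equals the simplified closed form 4s + c + 2S for nonnegative inputs.

def T_bound(s, c, S):
    # s: wait until the first fork test; then either the fork is free
    # (S), or we wait for the holder to finish and retry (S + c + 2s + s + S).
    return s + max(S, (S + c + 2 * s) + s + S)

for s, c, S in [(1, 5, 10), (2, 0, 0), (0, 3, 7)]:
    assert T_bound(s, c, S) == 4 * s + c + 2 * S
```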